00:00:00.000 Started by upstream project "autotest-nightly" build number 4250 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3613 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.127 The recommended git tool is: git 00:00:00.127 using credential 00000000-0000-0000-0000-000000000002 00:00:00.129 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.204 Fetching changes from the remote Git repository 00:00:00.210 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.275 Using shallow fetch with depth 1 00:00:00.275 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.275 > git --version # timeout=10 00:00:00.321 > git --version # 'git version 2.39.2' 00:00:00.321 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.360 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.360 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.311 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.326 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.339 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:08.339 > git config core.sparsecheckout # timeout=10 00:00:08.349 > git read-tree -mu HEAD # timeout=10 00:00:08.365 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:08.383 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:08.384 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:08.495 [Pipeline] Start of Pipeline 00:00:08.511 [Pipeline] library 00:00:08.512 Loading library shm_lib@master 00:00:08.513 Library shm_lib@master is cached. Copying from home. 00:00:08.528 [Pipeline] node 00:00:23.530 Still waiting to schedule task 00:00:23.530 Waiting for next available executor on ‘vagrant-vm-host’ 00:00:35.113 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:35.115 [Pipeline] { 00:00:35.124 [Pipeline] catchError 00:00:35.129 [Pipeline] { 00:00:35.139 [Pipeline] wrap 00:00:35.146 [Pipeline] { 00:00:35.152 [Pipeline] stage 00:00:35.154 [Pipeline] { (Prologue) 00:00:35.170 [Pipeline] echo 00:00:35.171 Node: VM-host-SM38 00:00:35.176 [Pipeline] cleanWs 00:00:35.191 [WS-CLEANUP] Deleting project workspace... 00:00:35.191 [WS-CLEANUP] Deferred wipeout is used... 
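For reference, the shallow checkout Jenkins performed above can be reproduced with plain git; a minimal sketch (repository URL and revision are taken from the log, the local directory name is illustrative):

  # Fetch only the tip of master (depth 1), then check out the fetched
  # revision detached, mirroring the git commands in the log above.
  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --depth=1 origin refs/heads/master
  git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf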
00:00:35.198 [WS-CLEANUP] done 00:00:35.405 [Pipeline] setCustomBuildProperty 00:00:35.493 [Pipeline] httpRequest 00:00:35.909 [Pipeline] echo 00:00:35.911 Sorcerer 10.211.164.101 is alive 00:00:35.920 [Pipeline] retry 00:00:35.922 [Pipeline] { 00:00:35.936 [Pipeline] httpRequest 00:00:35.941 HttpMethod: GET 00:00:35.942 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:35.942 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:35.959 Response Code: HTTP/1.1 200 OK 00:00:35.960 Success: Status code 200 is in the accepted range: 200,404 00:00:35.960 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:40.345 [Pipeline] } 00:00:40.363 [Pipeline] // retry 00:00:40.370 [Pipeline] sh 00:00:40.663 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:40.678 [Pipeline] httpRequest 00:00:41.125 [Pipeline] echo 00:00:41.127 Sorcerer 10.211.164.101 is alive 00:00:41.137 [Pipeline] retry 00:00:41.139 [Pipeline] { 00:00:41.153 [Pipeline] httpRequest 00:00:41.158 HttpMethod: GET 00:00:41.160 URL: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:41.160 Sending request to url: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:41.173 Response Code: HTTP/1.1 200 OK 00:00:41.173 Success: Status code 200 is in the accepted range: 200,404 00:00:41.174 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:01:55.017 [Pipeline] } 00:01:55.034 [Pipeline] // retry 00:01:55.041 [Pipeline] sh 00:01:55.315 + tar --no-same-owner -xf spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:01:59.544 [Pipeline] sh 00:01:59.820 + git -C spdk log --oneline -n5 00:01:59.820 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:01:59.820 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:01:59.820 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:01:59.820 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:01:59.820 cc533a3e5 nvme/nvme: Factor out submit_request function 00:01:59.836 [Pipeline] writeFile 00:01:59.851 [Pipeline] sh 00:02:00.128 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:00.139 [Pipeline] sh 00:02:00.417 + cat autorun-spdk.conf 00:02:00.417 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.417 SPDK_TEST_NVMF=1 00:02:00.417 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.417 SPDK_TEST_VFIOUSER=1 00:02:00.417 SPDK_TEST_USDT=1 00:02:00.417 SPDK_RUN_ASAN=1 00:02:00.417 SPDK_RUN_UBSAN=1 00:02:00.417 SPDK_TEST_NVMF_MDNS=1 00:02:00.417 NET_TYPE=virt 00:02:00.417 SPDK_JSONRPC_GO_CLIENT=1 00:02:00.417 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.422 RUN_NIGHTLY=1 00:02:00.424 [Pipeline] } 00:02:00.436 [Pipeline] // stage 00:02:00.451 [Pipeline] stage 00:02:00.453 [Pipeline] { (Run VM) 00:02:00.466 [Pipeline] sh 00:02:00.744 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:00.744 + echo 'Start stage prepare_nvme.sh' 00:02:00.744 Start stage prepare_nvme.sh 00:02:00.744 + [[ -n 6 ]] 00:02:00.744 + disk_prefix=ex6 00:02:00.744 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:02:00.744 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:02:00.744 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:02:00.744 ++ SPDK_RUN_FUNCTIONAL_TEST=1 
00:02:00.744 ++ SPDK_TEST_NVMF=1 00:02:00.744 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.744 ++ SPDK_TEST_VFIOUSER=1 00:02:00.744 ++ SPDK_TEST_USDT=1 00:02:00.744 ++ SPDK_RUN_ASAN=1 00:02:00.744 ++ SPDK_RUN_UBSAN=1 00:02:00.744 ++ SPDK_TEST_NVMF_MDNS=1 00:02:00.744 ++ NET_TYPE=virt 00:02:00.744 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:00.744 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.744 ++ RUN_NIGHTLY=1 00:02:00.744 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:00.744 + nvme_files=() 00:02:00.744 + declare -A nvme_files 00:02:00.744 + backend_dir=/var/lib/libvirt/images/backends 00:02:00.744 + nvme_files['nvme.img']=5G 00:02:00.744 + nvme_files['nvme-cmb.img']=5G 00:02:00.744 + nvme_files['nvme-multi0.img']=4G 00:02:00.744 + nvme_files['nvme-multi1.img']=4G 00:02:00.744 + nvme_files['nvme-multi2.img']=4G 00:02:00.744 + nvme_files['nvme-openstack.img']=8G 00:02:00.744 + nvme_files['nvme-zns.img']=5G 00:02:00.744 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:00.744 + (( SPDK_TEST_FTL == 1 )) 00:02:00.744 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:00.744 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:00.744 + for nvme in "${!nvme_files[@]}" 00:02:00.744 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:02:00.744 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.744 + for nvme in "${!nvme_files[@]}" 00:02:00.744 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:02:00.744 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.744 + for nvme in "${!nvme_files[@]}" 00:02:00.744 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:02:00.744 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:00.744 + for nvme in "${!nvme_files[@]}" 00:02:00.744 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:02:00.744 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.744 + for nvme in "${!nvme_files[@]}" 00:02:00.744 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:02:00.744 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.744 + for nvme in "${!nvme_files[@]}" 00:02:00.744 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:02:00.744 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.744 + for nvme in "${!nvme_files[@]}" 00:02:00.744 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:02:00.744 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.744 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:02:00.744 + echo 'End stage prepare_nvme.sh' 00:02:00.744 End stage prepare_nvme.sh 00:02:00.754 [Pipeline] sh 00:02:01.031 + DISTRO=fedora39 00:02:01.031 + CPUS=10 00:02:01.031 + RAM=12288 00:02:01.031 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:01.031 Setup: -n 10 -s 12288 -x -p libvirt 
--qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:02:01.031 00:02:01.031 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:02:01.031 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:02:01.031 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:01.031 HELP=0 00:02:01.031 DRY_RUN=0 00:02:01.031 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:02:01.031 NVME_DISKS_TYPE=nvme,nvme, 00:02:01.031 NVME_AUTO_CREATE=0 00:02:01.031 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:02:01.031 NVME_CMB=,, 00:02:01.031 NVME_PMR=,, 00:02:01.031 NVME_ZNS=,, 00:02:01.031 NVME_MS=,, 00:02:01.031 NVME_FDP=,, 00:02:01.031 SPDK_VAGRANT_DISTRO=fedora39 00:02:01.031 SPDK_VAGRANT_VMCPU=10 00:02:01.031 SPDK_VAGRANT_VMRAM=12288 00:02:01.031 SPDK_VAGRANT_PROVIDER=libvirt 00:02:01.031 SPDK_VAGRANT_HTTP_PROXY= 00:02:01.031 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:01.031 SPDK_OPENSTACK_NETWORK=0 00:02:01.031 VAGRANT_PACKAGE_BOX=0 00:02:01.031 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:01.031 FORCE_DISTRO=true 00:02:01.031 VAGRANT_BOX_VERSION= 00:02:01.031 EXTRA_VAGRANTFILES= 00:02:01.031 NIC_MODEL=e1000 00:02:01.031 00:02:01.031 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:02:01.031 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:03.555 Bringing machine 'default' up with 'libvirt' provider... 00:02:03.812 ==> default: Creating image (snapshot of base box volume). 00:02:10.439 ==> default: Creating domain with the following settings... 
00:02:10.439 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730901991_3c108d94ec198e6a1f08 00:02:10.439 ==> default: -- Domain type: kvm 00:02:10.439 ==> default: -- Cpus: 10 00:02:10.439 ==> default: -- Feature: acpi 00:02:10.439 ==> default: -- Feature: apic 00:02:10.439 ==> default: -- Feature: pae 00:02:10.439 ==> default: -- Memory: 12288M 00:02:10.439 ==> default: -- Memory Backing: hugepages: 00:02:10.439 ==> default: -- Management MAC: 00:02:10.439 ==> default: -- Loader: 00:02:10.439 ==> default: -- Nvram: 00:02:10.439 ==> default: -- Base box: spdk/fedora39 00:02:10.439 ==> default: -- Storage pool: default 00:02:10.439 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730901991_3c108d94ec198e6a1f08.img (20G) 00:02:10.439 ==> default: -- Volume Cache: default 00:02:10.439 ==> default: -- Kernel: 00:02:10.439 ==> default: -- Initrd: 00:02:10.439 ==> default: -- Graphics Type: vnc 00:02:10.439 ==> default: -- Graphics Port: -1 00:02:10.439 ==> default: -- Graphics IP: 127.0.0.1 00:02:10.439 ==> default: -- Graphics Password: Not defined 00:02:10.439 ==> default: -- Video Type: cirrus 00:02:10.439 ==> default: -- Video VRAM: 9216 00:02:10.439 ==> default: -- Sound Type: 00:02:10.439 ==> default: -- Keymap: en-us 00:02:10.439 ==> default: -- TPM Path: 00:02:10.439 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:10.439 ==> default: -- Command line args: 00:02:10.439 ==> default: -> value=-device, 00:02:10.439 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:10.439 ==> default: -> value=-drive, 00:02:10.439 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:02:10.439 ==> default: -> value=-device, 00:02:10.439 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.439 ==> default: -> value=-device, 00:02:10.439 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:10.439 ==> default: -> value=-drive, 00:02:10.439 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:10.439 ==> default: -> value=-device, 00:02:10.439 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.439 ==> default: -> value=-drive, 00:02:10.439 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:10.439 ==> default: -> value=-device, 00:02:10.440 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.440 ==> default: -> value=-drive, 00:02:10.440 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:10.440 ==> default: -> value=-device, 00:02:10.440 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.440 ==> default: Creating shared folders metadata... 00:02:10.440 ==> default: Starting domain. 00:02:11.371 ==> default: Waiting for domain to get an IP address... 00:02:26.234 ==> default: Waiting for SSH to become available... 00:02:26.234 ==> default: Configuring and enabling network interfaces... 
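The -drive/-device pairs in the domain settings above follow QEMU's multi-namespace NVMe pattern: one '-device nvme' per controller, then one '-drive if=none' plus one '-device nvme-ns' per namespace, with nsid increasing on the same bus. A minimal standalone sketch of the second controller with its first two namespaces (argument values copied from the log; memory, machine type, and all other VM options omitted):

  qemu-system-x86_64 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096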
00:02:29.526 default: SSH address: 192.168.121.161:22 00:02:29.526 default: SSH username: vagrant 00:02:29.526 default: SSH auth method: private key 00:02:31.425 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:37.975 ==> default: Mounting SSHFS shared folder... 00:02:39.871 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:39.871 ==> default: Checking Mount.. 00:02:40.802 ==> default: Folder Successfully Mounted! 00:02:40.802 00:02:40.802 SUCCESS! 00:02:40.802 00:02:40.802 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:40.802 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:40.802 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:40.802 00:02:40.809 [Pipeline] } 00:02:40.824 [Pipeline] // stage 00:02:40.833 [Pipeline] dir 00:02:40.833 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:40.835 [Pipeline] { 00:02:40.848 [Pipeline] catchError 00:02:40.849 [Pipeline] { 00:02:40.862 [Pipeline] sh 00:02:41.137 + vagrant ssh-config --host vagrant 00:02:41.137 + sed -ne '/^Host/,$p' 00:02:41.137 + tee ssh_conf 00:02:43.673 Host vagrant 00:02:43.673 HostName 192.168.121.161 00:02:43.673 User vagrant 00:02:43.673 Port 22 00:02:43.673 UserKnownHostsFile /dev/null 00:02:43.673 StrictHostKeyChecking no 00:02:43.673 PasswordAuthentication no 00:02:43.673 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:43.673 IdentitiesOnly yes 00:02:43.673 LogLevel FATAL 00:02:43.673 ForwardAgent yes 00:02:43.673 ForwardX11 yes 00:02:43.673 00:02:43.685 [Pipeline] withEnv 00:02:43.687 [Pipeline] { 00:02:43.700 [Pipeline] sh 00:02:43.977 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:43.977 source /etc/os-release 00:02:43.977 [[ -e /image.version ]] && img=$(< /image.version) 00:02:43.977 # Minimal, systemd-like check. 00:02:43.977 if [[ -e /.dockerenv ]]; then 00:02:43.977 # Clear garbage from the node'\''s name: 00:02:43.977 # agt-er_autotest_547-896 -> autotest_547-896 00:02:43.977 # $HOSTNAME is the actual container id 00:02:43.977 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:43.977 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:43.977 # We can assume this is a mount from a host where container is running, 00:02:43.977 # so fetch its hostname to easily identify the target swarm worker. 
00:02:43.977 container="$(< /etc/hostname) ($agent)" 00:02:43.977 else 00:02:43.977 # Fallback 00:02:43.977 container=$agent 00:02:43.977 fi 00:02:43.977 fi 00:02:43.977 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:43.977 ' 00:02:43.987 [Pipeline] } 00:02:44.003 [Pipeline] // withEnv 00:02:44.012 [Pipeline] setCustomBuildProperty 00:02:44.026 [Pipeline] stage 00:02:44.029 [Pipeline] { (Tests) 00:02:44.047 [Pipeline] sh 00:02:44.361 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:44.376 [Pipeline] sh 00:02:44.652 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:44.921 [Pipeline] timeout 00:02:44.922 Timeout set to expire in 1 hr 0 min 00:02:44.923 [Pipeline] { 00:02:44.937 [Pipeline] sh 00:02:45.212 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:45.468 HEAD is now at d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:02:45.478 [Pipeline] sh 00:02:45.751 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:46.023 [Pipeline] sh 00:02:46.302 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:46.320 [Pipeline] sh 00:02:46.595 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo' 00:02:46.595 ++ readlink -f spdk_repo 00:02:46.595 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:46.595 + [[ -n /home/vagrant/spdk_repo ]] 00:02:46.595 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:46.595 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:46.595 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:46.595 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:46.595 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:46.595 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:46.595 + cd /home/vagrant/spdk_repo 00:02:46.595 + source /etc/os-release 00:02:46.595 ++ NAME='Fedora Linux' 00:02:46.595 ++ VERSION='39 (Cloud Edition)' 00:02:46.595 ++ ID=fedora 00:02:46.595 ++ VERSION_ID=39 00:02:46.595 ++ VERSION_CODENAME= 00:02:46.595 ++ PLATFORM_ID=platform:f39 00:02:46.595 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:46.595 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:46.595 ++ LOGO=fedora-logo-icon 00:02:46.595 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:46.595 ++ HOME_URL=https://fedoraproject.org/ 00:02:46.595 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:46.595 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:46.595 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:46.595 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:46.595 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:46.595 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:46.595 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:46.595 ++ SUPPORT_END=2024-11-12 00:02:46.595 ++ VARIANT='Cloud Edition' 00:02:46.595 ++ VARIANT_ID=cloud 00:02:46.595 + uname -a 00:02:46.595 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:46.595 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:47.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:47.159 Hugepages 00:02:47.159 node hugesize free / total 00:02:47.159 node0 1048576kB 0 / 0 00:02:47.159 node0 2048kB 0 / 0 00:02:47.159 00:02:47.159 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:47.159 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:47.159 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:47.159 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:47.159 + rm -f /tmp/spdk-ld-path 00:02:47.159 + source autorun-spdk.conf 00:02:47.159 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.159 ++ SPDK_TEST_NVMF=1 00:02:47.159 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:47.159 ++ SPDK_TEST_VFIOUSER=1 00:02:47.159 ++ SPDK_TEST_USDT=1 00:02:47.159 ++ SPDK_RUN_ASAN=1 00:02:47.159 ++ SPDK_RUN_UBSAN=1 00:02:47.159 ++ SPDK_TEST_NVMF_MDNS=1 00:02:47.159 ++ NET_TYPE=virt 00:02:47.159 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:47.159 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:47.159 ++ RUN_NIGHTLY=1 00:02:47.159 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:47.159 + [[ -n '' ]] 00:02:47.159 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:47.159 + for M in /var/spdk/build-*-manifest.txt 00:02:47.159 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:47.159 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.159 + for M in /var/spdk/build-*-manifest.txt 00:02:47.159 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:47.160 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.160 + for M in /var/spdk/build-*-manifest.txt 00:02:47.160 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:47.160 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.160 ++ uname 00:02:47.160 + [[ Linux == \L\i\n\u\x ]] 00:02:47.160 + sudo dmesg -T 00:02:47.160 + sudo dmesg --clear 00:02:47.160 + dmesg_pid=4988 00:02:47.160 + [[ Fedora Linux == FreeBSD ]] 00:02:47.160 
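The Hugepages table printed by setup.sh status above reflects the standard per-node sysfs counters; a rough bash equivalent (the sysfs layout is standard Linux, but the loop itself is illustrative, not setup.sh's actual code):

  # Print free/total hugepages per NUMA node and page size,
  # matching the "node hugesize free / total" columns above.
  for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo "$(basename "$node") ${hp##*hugepages-}" \
           "$(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
    done
  done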
+ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.160 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.160 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:47.160 + sudo dmesg -Tw 00:02:47.160 + [[ -x /usr/src/fio-static/fio ]] 00:02:47.160 + export FIO_BIN=/usr/src/fio-static/fio 00:02:47.160 + FIO_BIN=/usr/src/fio-static/fio 00:02:47.160 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:47.160 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:47.160 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:47.160 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.160 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.160 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:47.160 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.160 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.160 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.160 14:07:14 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:47.160 14:07:14 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_VFIOUSER=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVMF_MDNS=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:47.160 14:07:14 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:47.160 14:07:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:47.160 14:07:14 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.160 14:07:14 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:47.160 14:07:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:47.160 14:07:14 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:47.160 14:07:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:47.160 14:07:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.160 14:07:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.160 14:07:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.160 14:07:14 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.160 14:07:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.160 14:07:14 -- paths/export.sh@5 -- $ export PATH 00:02:47.160 14:07:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.160 14:07:14 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:47.160 14:07:14 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:47.160 14:07:14 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730902034.XXXXXX 00:02:47.160 14:07:14 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730902034.yA7ojX 00:02:47.160 14:07:14 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:47.160 14:07:14 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:47.160 14:07:14 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:47.160 14:07:14 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:47.160 14:07:14 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:47.160 14:07:14 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:47.160 14:07:14 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:47.160 14:07:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.160 14:07:14 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:02:47.160 14:07:14 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:47.160 14:07:14 -- pm/common@17 -- $ local monitor 00:02:47.160 14:07:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.160 14:07:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.160 14:07:14 -- pm/common@25 -- $ sleep 1 00:02:47.160 14:07:14 -- pm/common@21 -- $ date +%s 00:02:47.160 14:07:14 -- pm/common@21 -- $ date +%s 00:02:47.160 
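The SPDK_WORKSPACE lines above combine date +%s with a mktemp template to get a unique, timestamped scratch directory; a minimal sketch of the same pattern (variable names follow the log, the cleanup trap is an illustrative addition, not part of the traced script):

  # mktemp -dt creates the directory under $TMPDIR (default /tmp);
  # the XXXXXX suffix is replaced with random characters.
  stamp=$(date +%s)
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT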
14:07:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730902034 00:02:47.418 14:07:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730902034 00:02:47.418 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730902034_collect-cpu-load.pm.log 00:02:47.418 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730902034_collect-vmstat.pm.log 00:02:48.350 14:07:15 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:48.350 14:07:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:48.350 14:07:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:48.350 14:07:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:48.350 14:07:15 -- spdk/autobuild.sh@16 -- $ date -u 00:02:48.350 Wed Nov 6 02:07:15 PM UTC 2024 00:02:48.350 14:07:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:48.350 v25.01-pre-170-gd1c46ed8e 00:02:48.350 14:07:15 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:48.350 14:07:15 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:48.350 14:07:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:48.350 14:07:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:48.350 14:07:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.350 ************************************ 00:02:48.350 START TEST asan 00:02:48.350 ************************************ 00:02:48.350 using asan 00:02:48.350 14:07:15 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:48.350 00:02:48.350 real 0m0.000s 00:02:48.350 user 0m0.000s 00:02:48.350 sys 0m0.000s 00:02:48.351 14:07:15 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:48.351 14:07:15 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:48.351 ************************************ 00:02:48.351 END TEST asan 00:02:48.351 ************************************ 00:02:48.351 14:07:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:48.351 14:07:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:48.351 14:07:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:48.351 14:07:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:48.351 14:07:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.351 ************************************ 00:02:48.351 START TEST ubsan 00:02:48.351 ************************************ 00:02:48.351 using ubsan 00:02:48.351 14:07:15 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:48.351 00:02:48.351 real 0m0.000s 00:02:48.351 user 0m0.000s 00:02:48.351 sys 0m0.000s 00:02:48.351 14:07:15 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:48.351 ************************************ 00:02:48.351 END TEST ubsan 00:02:48.351 ************************************ 00:02:48.351 14:07:15 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:48.351 14:07:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:48.351 14:07:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:48.351 14:07:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:48.351 14:07:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:48.351 14:07:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:48.351 14:07:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:48.351 14:07:15 -- 
spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:48.351 14:07:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:48.351 14:07:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:02:48.609 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:48.609 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:48.869 Using 'verbs' RDMA provider 00:03:01.649 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:11.670 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:11.670 go version go1.21.1 linux/amd64 00:03:11.670 Creating mk/config.mk...done. 00:03:11.670 Creating mk/cc.flags.mk...done. 00:03:11.670 Type 'make' to build. 00:03:11.670 14:07:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:11.670 14:07:38 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:11.670 14:07:38 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:11.670 14:07:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.670 ************************************ 00:03:11.670 START TEST make 00:03:11.670 ************************************ 00:03:11.670 14:07:38 make -- common/autotest_common.sh@1127 -- $ make -j10 00:03:11.670 make[1]: Nothing to be done for 'all'. 00:03:12.612 The Meson build system 00:03:12.612 Version: 1.5.0 00:03:12.612 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:03:12.612 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:12.612 Build type: native build 00:03:12.612 Project name: libvfio-user 00:03:12.612 Project version: 0.0.1 00:03:12.612 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:12.612 C linker for the host machine: cc ld.bfd 2.40-14 00:03:12.612 Host machine cpu family: x86_64 00:03:12.612 Host machine cpu: x86_64 00:03:12.612 Run-time dependency threads found: YES 00:03:12.612 Library dl found: YES 00:03:12.612 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:12.612 Run-time dependency json-c found: YES 0.17 00:03:12.612 Run-time dependency cmocka found: YES 1.1.7 00:03:12.612 Program pytest-3 found: NO 00:03:12.612 Program flake8 found: NO 00:03:12.612 Program misspell-fixer found: NO 00:03:12.612 Program restructuredtext-lint found: NO 00:03:12.612 Program valgrind found: YES (/usr/bin/valgrind) 00:03:12.612 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:12.612 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:12.612 Compiler for C supports arguments -Wwrite-strings: YES 00:03:12.612 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:12.612 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:03:12.612 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:03:12.612 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:12.612 Build targets in project: 8 00:03:12.612 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:12.612 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:12.612 00:03:12.612 libvfio-user 0.0.1 00:03:12.612 00:03:12.612 User defined options 00:03:12.612 buildtype : debug 00:03:12.612 default_library: shared 00:03:12.612 libdir : /usr/local/lib 00:03:12.612 00:03:12.612 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.184 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:13.184 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:13.184 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:13.184 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:13.184 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:13.184 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:13.184 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:13.184 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:13.184 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:13.184 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:13.184 [10/37] Compiling C object samples/null.p/null.c.o 00:03:13.184 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:13.184 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:13.471 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:13.471 [14/37] Compiling C object samples/client.p/client.c.o 00:03:13.471 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:13.471 [16/37] Linking target samples/client 00:03:13.471 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:13.471 [18/37] Compiling C object samples/server.p/server.c.o 00:03:13.471 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:13.471 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:13.471 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:13.471 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:13.471 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:13.471 [24/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:13.471 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:13.471 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:13.471 [27/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:13.471 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:13.471 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:13.732 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:13.732 [31/37] Linking target test/unit_tests 00:03:13.732 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:13.732 [33/37] Linking target samples/server 00:03:13.732 [34/37] Linking target samples/null 00:03:13.732 [35/37] Linking target samples/lspci 00:03:13.732 [36/37] Linking target samples/gpio-pci-idio-16 00:03:13.732 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:13.732 INFO: autodetecting backend as ninja 00:03:13.732 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:13.992 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:14.253 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:14.253 ninja: no work to do. 00:03:22.447 The Meson build system 00:03:22.447 Version: 1.5.0 00:03:22.447 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:22.447 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:22.447 Build type: native build 00:03:22.447 Program cat found: YES (/usr/bin/cat) 00:03:22.447 Project name: DPDK 00:03:22.447 Project version: 24.03.0 00:03:22.447 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:22.447 C linker for the host machine: cc ld.bfd 2.40-14 00:03:22.447 Host machine cpu family: x86_64 00:03:22.447 Host machine cpu: x86_64 00:03:22.447 Message: ## Building in Developer Mode ## 00:03:22.447 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:22.447 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:22.447 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:22.447 Program python3 found: YES (/usr/bin/python3) 00:03:22.447 Program cat found: YES (/usr/bin/cat) 00:03:22.447 Compiler for C supports arguments -march=native: YES 00:03:22.447 Checking for size of "void *" : 8 00:03:22.447 Checking for size of "void *" : 8 (cached) 00:03:22.447 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:22.447 Library m found: YES 00:03:22.447 Library numa found: YES 00:03:22.447 Has header "numaif.h" : YES 00:03:22.447 Library fdt found: NO 00:03:22.447 Library execinfo found: NO 00:03:22.447 Has header "execinfo.h" : YES 00:03:22.447 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:22.447 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:22.447 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:22.447 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:22.447 Run-time dependency openssl found: YES 3.1.1 00:03:22.447 Run-time dependency libpcap found: YES 1.10.4 00:03:22.447 Has header "pcap.h" with dependency libpcap: YES 00:03:22.447 Compiler for C supports arguments -Wcast-qual: YES 00:03:22.447 Compiler for C supports arguments -Wdeprecated: YES 00:03:22.447 Compiler for C supports arguments -Wformat: YES 00:03:22.447 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:22.447 Compiler for C supports arguments -Wformat-security: NO 00:03:22.447 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:22.447 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:22.447 Compiler for C supports arguments -Wnested-externs: YES 00:03:22.447 Compiler for C supports arguments -Wold-style-definition: YES 00:03:22.447 Compiler for C supports arguments -Wpointer-arith: YES 00:03:22.447 Compiler for C supports arguments -Wsign-compare: YES 00:03:22.447 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:22.447 Compiler for C supports arguments -Wundef: YES 00:03:22.447 Compiler for C supports arguments -Wwrite-strings: YES 00:03:22.447 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:22.447 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:22.447 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:22.447 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:03:22.447 Program objdump found: YES (/usr/bin/objdump) 00:03:22.447 Compiler for C supports arguments -mavx512f: YES 00:03:22.447 Checking if "AVX512 checking" compiles: YES 00:03:22.447 Fetching value of define "__SSE4_2__" : 1 00:03:22.447 Fetching value of define "__AES__" : 1 00:03:22.447 Fetching value of define "__AVX__" : 1 00:03:22.447 Fetching value of define "__AVX2__" : 1 00:03:22.447 Fetching value of define "__AVX512BW__" : 1 00:03:22.447 Fetching value of define "__AVX512CD__" : 1 00:03:22.447 Fetching value of define "__AVX512DQ__" : 1 00:03:22.447 Fetching value of define "__AVX512F__" : 1 00:03:22.447 Fetching value of define "__AVX512VL__" : 1 00:03:22.447 Fetching value of define "__PCLMUL__" : 1 00:03:22.447 Fetching value of define "__RDRND__" : 1 00:03:22.447 Fetching value of define "__RDSEED__" : 1 00:03:22.447 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:22.447 Fetching value of define "__znver1__" : (undefined) 00:03:22.447 Fetching value of define "__znver2__" : (undefined) 00:03:22.447 Fetching value of define "__znver3__" : (undefined) 00:03:22.447 Fetching value of define "__znver4__" : (undefined) 00:03:22.447 Library asan found: YES 00:03:22.447 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:22.447 Message: lib/log: Defining dependency "log" 00:03:22.447 Message: lib/kvargs: Defining dependency "kvargs" 00:03:22.447 Message: lib/telemetry: Defining dependency "telemetry" 00:03:22.447 Library rt found: YES 00:03:22.447 Checking for function "getentropy" : NO 00:03:22.447 Message: lib/eal: Defining dependency "eal" 00:03:22.447 Message: lib/ring: Defining dependency "ring" 00:03:22.447 Message: lib/rcu: Defining dependency "rcu" 00:03:22.447 Message: lib/mempool: Defining dependency "mempool" 00:03:22.447 Message: lib/mbuf: Defining dependency "mbuf" 00:03:22.447 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:22.447 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:22.447 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:22.447 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:22.447 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:22.447 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:22.447 Compiler for C supports arguments -mpclmul: YES 00:03:22.447 Compiler for C supports arguments -maes: YES 00:03:22.447 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:22.447 Compiler for C supports arguments -mavx512bw: YES 00:03:22.447 Compiler for C supports arguments -mavx512dq: YES 00:03:22.447 Compiler for C supports arguments -mavx512vl: YES 00:03:22.447 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:22.447 Compiler for C supports arguments -mavx2: YES 00:03:22.447 Compiler for C supports arguments -mavx: YES 00:03:22.447 Message: lib/net: Defining dependency "net" 00:03:22.447 Message: lib/meter: Defining dependency "meter" 00:03:22.447 Message: lib/ethdev: Defining dependency "ethdev" 00:03:22.447 Message: lib/pci: Defining dependency "pci" 00:03:22.447 Message: lib/cmdline: Defining dependency "cmdline" 00:03:22.447 Message: lib/hash: Defining dependency "hash" 00:03:22.447 Message: lib/timer: Defining dependency "timer" 00:03:22.447 Message: lib/compressdev: Defining dependency "compressdev" 00:03:22.447 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:22.447 Message: lib/dmadev: Defining dependency "dmadev" 00:03:22.448 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:22.448 Message: 
lib/power: Defining dependency "power" 00:03:22.448 Message: lib/reorder: Defining dependency "reorder" 00:03:22.448 Message: lib/security: Defining dependency "security" 00:03:22.448 Has header "linux/userfaultfd.h" : YES 00:03:22.448 Has header "linux/vduse.h" : YES 00:03:22.448 Message: lib/vhost: Defining dependency "vhost" 00:03:22.448 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:22.448 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:22.448 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:22.448 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:22.448 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:22.448 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:22.448 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:22.448 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:22.448 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:22.448 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:22.448 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:22.448 Configuring doxy-api-html.conf using configuration 00:03:22.448 Configuring doxy-api-man.conf using configuration 00:03:22.448 Program mandb found: YES (/usr/bin/mandb) 00:03:22.448 Program sphinx-build found: NO 00:03:22.448 Configuring rte_build_config.h using configuration 00:03:22.448 Message: 00:03:22.448 ================= 00:03:22.448 Applications Enabled 00:03:22.448 ================= 00:03:22.448 00:03:22.448 apps: 00:03:22.448 00:03:22.448 00:03:22.448 Message: 00:03:22.448 ================= 00:03:22.448 Libraries Enabled 00:03:22.448 ================= 00:03:22.448 00:03:22.448 libs: 00:03:22.448 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:22.448 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:22.448 cryptodev, dmadev, power, reorder, security, vhost, 00:03:22.448 00:03:22.448 Message: 00:03:22.448 =============== 00:03:22.448 Drivers Enabled 00:03:22.448 =============== 00:03:22.448 00:03:22.448 common: 00:03:22.448 00:03:22.448 bus: 00:03:22.448 pci, vdev, 00:03:22.448 mempool: 00:03:22.448 ring, 00:03:22.448 dma: 00:03:22.448 00:03:22.448 net: 00:03:22.448 00:03:22.448 crypto: 00:03:22.448 00:03:22.448 compress: 00:03:22.448 00:03:22.448 vdpa: 00:03:22.448 00:03:22.448 00:03:22.448 Message: 00:03:22.448 ================= 00:03:22.448 Content Skipped 00:03:22.448 ================= 00:03:22.448 00:03:22.448 apps: 00:03:22.448 dumpcap: explicitly disabled via build config 00:03:22.448 graph: explicitly disabled via build config 00:03:22.448 pdump: explicitly disabled via build config 00:03:22.448 proc-info: explicitly disabled via build config 00:03:22.448 test-acl: explicitly disabled via build config 00:03:22.448 test-bbdev: explicitly disabled via build config 00:03:22.448 test-cmdline: explicitly disabled via build config 00:03:22.448 test-compress-perf: explicitly disabled via build config 00:03:22.448 test-crypto-perf: explicitly disabled via build config 00:03:22.448 test-dma-perf: explicitly disabled via build config 00:03:22.448 test-eventdev: explicitly disabled via build config 00:03:22.448 test-fib: explicitly disabled via build config 00:03:22.448 test-flow-perf: explicitly disabled via build config 00:03:22.448 test-gpudev: explicitly disabled via build config 00:03:22.448 test-mldev: explicitly disabled via build config 
00:03:22.448 test-pipeline: explicitly disabled via build config 00:03:22.448 test-pmd: explicitly disabled via build config 00:03:22.448 test-regex: explicitly disabled via build config 00:03:22.448 test-sad: explicitly disabled via build config 00:03:22.448 test-security-perf: explicitly disabled via build config 00:03:22.448 00:03:22.448 libs: 00:03:22.448 argparse: explicitly disabled via build config 00:03:22.448 metrics: explicitly disabled via build config 00:03:22.448 acl: explicitly disabled via build config 00:03:22.448 bbdev: explicitly disabled via build config 00:03:22.448 bitratestats: explicitly disabled via build config 00:03:22.448 bpf: explicitly disabled via build config 00:03:22.448 cfgfile: explicitly disabled via build config 00:03:22.448 distributor: explicitly disabled via build config 00:03:22.448 efd: explicitly disabled via build config 00:03:22.448 eventdev: explicitly disabled via build config 00:03:22.448 dispatcher: explicitly disabled via build config 00:03:22.448 gpudev: explicitly disabled via build config 00:03:22.448 gro: explicitly disabled via build config 00:03:22.448 gso: explicitly disabled via build config 00:03:22.448 ip_frag: explicitly disabled via build config 00:03:22.448 jobstats: explicitly disabled via build config 00:03:22.448 latencystats: explicitly disabled via build config 00:03:22.448 lpm: explicitly disabled via build config 00:03:22.448 member: explicitly disabled via build config 00:03:22.448 pcapng: explicitly disabled via build config 00:03:22.448 rawdev: explicitly disabled via build config 00:03:22.448 regexdev: explicitly disabled via build config 00:03:22.448 mldev: explicitly disabled via build config 00:03:22.448 rib: explicitly disabled via build config 00:03:22.448 sched: explicitly disabled via build config 00:03:22.448 stack: explicitly disabled via build config 00:03:22.448 ipsec: explicitly disabled via build config 00:03:22.448 pdcp: explicitly disabled via build config 00:03:22.448 fib: explicitly disabled via build config 00:03:22.448 port: explicitly disabled via build config 00:03:22.448 pdump: explicitly disabled via build config 00:03:22.448 table: explicitly disabled via build config 00:03:22.448 pipeline: explicitly disabled via build config 00:03:22.448 graph: explicitly disabled via build config 00:03:22.448 node: explicitly disabled via build config 00:03:22.448 00:03:22.448 drivers: 00:03:22.448 common/cpt: not in enabled drivers build config 00:03:22.448 common/dpaax: not in enabled drivers build config 00:03:22.448 common/iavf: not in enabled drivers build config 00:03:22.448 common/idpf: not in enabled drivers build config 00:03:22.448 common/ionic: not in enabled drivers build config 00:03:22.448 common/mvep: not in enabled drivers build config 00:03:22.448 common/octeontx: not in enabled drivers build config 00:03:22.448 bus/auxiliary: not in enabled drivers build config 00:03:22.448 bus/cdx: not in enabled drivers build config 00:03:22.449 bus/dpaa: not in enabled drivers build config 00:03:22.449 bus/fslmc: not in enabled drivers build config 00:03:22.449 bus/ifpga: not in enabled drivers build config 00:03:22.449 bus/platform: not in enabled drivers build config 00:03:22.449 bus/uacce: not in enabled drivers build config 00:03:22.449 bus/vmbus: not in enabled drivers build config 00:03:22.449 common/cnxk: not in enabled drivers build config 00:03:22.449 common/mlx5: not in enabled drivers build config 00:03:22.449 common/nfp: not in enabled drivers build config 00:03:22.449 common/nitrox: not in 
enabled drivers build config 00:03:22.449 common/qat: not in enabled drivers build config 00:03:22.449 common/sfc_efx: not in enabled drivers build config 00:03:22.449 mempool/bucket: not in enabled drivers build config 00:03:22.449 mempool/cnxk: not in enabled drivers build config 00:03:22.449 mempool/dpaa: not in enabled drivers build config 00:03:22.449 mempool/dpaa2: not in enabled drivers build config 00:03:22.449 mempool/octeontx: not in enabled drivers build config 00:03:22.449 mempool/stack: not in enabled drivers build config 00:03:22.449 dma/cnxk: not in enabled drivers build config 00:03:22.449 dma/dpaa: not in enabled drivers build config 00:03:22.449 dma/dpaa2: not in enabled drivers build config 00:03:22.449 dma/hisilicon: not in enabled drivers build config 00:03:22.449 dma/idxd: not in enabled drivers build config 00:03:22.449 dma/ioat: not in enabled drivers build config 00:03:22.449 dma/skeleton: not in enabled drivers build config 00:03:22.449 net/af_packet: not in enabled drivers build config 00:03:22.449 net/af_xdp: not in enabled drivers build config 00:03:22.449 net/ark: not in enabled drivers build config 00:03:22.449 net/atlantic: not in enabled drivers build config 00:03:22.449 net/avp: not in enabled drivers build config 00:03:22.449 net/axgbe: not in enabled drivers build config 00:03:22.449 net/bnx2x: not in enabled drivers build config 00:03:22.449 net/bnxt: not in enabled drivers build config 00:03:22.449 net/bonding: not in enabled drivers build config 00:03:22.449 net/cnxk: not in enabled drivers build config 00:03:22.449 net/cpfl: not in enabled drivers build config 00:03:22.449 net/cxgbe: not in enabled drivers build config 00:03:22.449 net/dpaa: not in enabled drivers build config 00:03:22.449 net/dpaa2: not in enabled drivers build config 00:03:22.449 net/e1000: not in enabled drivers build config 00:03:22.449 net/ena: not in enabled drivers build config 00:03:22.449 net/enetc: not in enabled drivers build config 00:03:22.449 net/enetfec: not in enabled drivers build config 00:03:22.449 net/enic: not in enabled drivers build config 00:03:22.449 net/failsafe: not in enabled drivers build config 00:03:22.449 net/fm10k: not in enabled drivers build config 00:03:22.449 net/gve: not in enabled drivers build config 00:03:22.449 net/hinic: not in enabled drivers build config 00:03:22.449 net/hns3: not in enabled drivers build config 00:03:22.449 net/i40e: not in enabled drivers build config 00:03:22.449 net/iavf: not in enabled drivers build config 00:03:22.449 net/ice: not in enabled drivers build config 00:03:22.449 net/idpf: not in enabled drivers build config 00:03:22.449 net/igc: not in enabled drivers build config 00:03:22.449 net/ionic: not in enabled drivers build config 00:03:22.449 net/ipn3ke: not in enabled drivers build config 00:03:22.449 net/ixgbe: not in enabled drivers build config 00:03:22.449 net/mana: not in enabled drivers build config 00:03:22.449 net/memif: not in enabled drivers build config 00:03:22.449 net/mlx4: not in enabled drivers build config 00:03:22.449 net/mlx5: not in enabled drivers build config 00:03:22.449 net/mvneta: not in enabled drivers build config 00:03:22.449 net/mvpp2: not in enabled drivers build config 00:03:22.449 net/netvsc: not in enabled drivers build config 00:03:22.449 net/nfb: not in enabled drivers build config 00:03:22.449 net/nfp: not in enabled drivers build config 00:03:22.449 net/ngbe: not in enabled drivers build config 00:03:22.449 net/null: not in enabled drivers build config 00:03:22.449 
net/octeontx: not in enabled drivers build config 00:03:22.449 net/octeon_ep: not in enabled drivers build config 00:03:22.449 net/pcap: not in enabled drivers build config 00:03:22.449 net/pfe: not in enabled drivers build config 00:03:22.449 net/qede: not in enabled drivers build config 00:03:22.449 net/ring: not in enabled drivers build config 00:03:22.449 net/sfc: not in enabled drivers build config 00:03:22.449 net/softnic: not in enabled drivers build config 00:03:22.449 net/tap: not in enabled drivers build config 00:03:22.449 net/thunderx: not in enabled drivers build config 00:03:22.449 net/txgbe: not in enabled drivers build config 00:03:22.449 net/vdev_netvsc: not in enabled drivers build config 00:03:22.449 net/vhost: not in enabled drivers build config 00:03:22.449 net/virtio: not in enabled drivers build config 00:03:22.449 net/vmxnet3: not in enabled drivers build config 00:03:22.449 raw/*: missing internal dependency, "rawdev" 00:03:22.449 crypto/armv8: not in enabled drivers build config 00:03:22.449 crypto/bcmfs: not in enabled drivers build config 00:03:22.449 crypto/caam_jr: not in enabled drivers build config 00:03:22.449 crypto/ccp: not in enabled drivers build config 00:03:22.449 crypto/cnxk: not in enabled drivers build config 00:03:22.449 crypto/dpaa_sec: not in enabled drivers build config 00:03:22.449 crypto/dpaa2_sec: not in enabled drivers build config 00:03:22.449 crypto/ipsec_mb: not in enabled drivers build config 00:03:22.449 crypto/mlx5: not in enabled drivers build config 00:03:22.449 crypto/mvsam: not in enabled drivers build config 00:03:22.449 crypto/nitrox: not in enabled drivers build config 00:03:22.449 crypto/null: not in enabled drivers build config 00:03:22.449 crypto/octeontx: not in enabled drivers build config 00:03:22.449 crypto/openssl: not in enabled drivers build config 00:03:22.449 crypto/scheduler: not in enabled drivers build config 00:03:22.449 crypto/uadk: not in enabled drivers build config 00:03:22.449 crypto/virtio: not in enabled drivers build config 00:03:22.449 compress/isal: not in enabled drivers build config 00:03:22.449 compress/mlx5: not in enabled drivers build config 00:03:22.449 compress/nitrox: not in enabled drivers build config 00:03:22.449 compress/octeontx: not in enabled drivers build config 00:03:22.449 compress/zlib: not in enabled drivers build config 00:03:22.449 regex/*: missing internal dependency, "regexdev" 00:03:22.449 ml/*: missing internal dependency, "mldev" 00:03:22.449 vdpa/ifc: not in enabled drivers build config 00:03:22.449 vdpa/mlx5: not in enabled drivers build config 00:03:22.449 vdpa/nfp: not in enabled drivers build config 00:03:22.449 vdpa/sfc: not in enabled drivers build config 00:03:22.449 event/*: missing internal dependency, "eventdev" 00:03:22.449 baseband/*: missing internal dependency, "bbdev" 00:03:22.449 gpu/*: missing internal dependency, "gpudev" 00:03:22.449 00:03:22.449 00:03:22.449 Build targets in project: 84 00:03:22.449 00:03:22.449 DPDK 24.03.0 00:03:22.449 00:03:22.449 User defined options 00:03:22.449 buildtype : debug 00:03:22.450 default_library : shared 00:03:22.450 libdir : lib 00:03:22.450 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:22.450 b_sanitize : address 00:03:22.450 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:22.450 c_link_args : 00:03:22.450 cpu_instruction_set: native 00:03:22.450 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:22.450 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:22.450 enable_docs : false 00:03:22.450 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:22.450 enable_kmods : false 00:03:22.450 max_lcores : 128 00:03:22.450 tests : false 00:03:22.450 00:03:22.450 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:22.708 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:22.965 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:22.965 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:22.965 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:22.965 [4/267] Linking static target lib/librte_log.a 00:03:22.965 [5/267] Linking static target lib/librte_kvargs.a 00:03:22.965 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:23.222 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:23.222 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:23.222 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:23.222 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:23.222 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:23.479 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:23.479 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.479 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:23.479 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:23.479 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:23.479 [17/267] Linking static target lib/librte_telemetry.a 00:03:23.479 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:23.736 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:23.736 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.736 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:23.736 [22/267] Linking target lib/librte_log.so.24.1 00:03:23.736 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:23.736 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:23.736 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:23.993 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:23.993 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:23.993 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:23.993 [29/267] Linking target lib/librte_kvargs.so.24.1 00:03:23.993 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:24.271 [31/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:24.271 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:24.271 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.271 [34/267] Linking target lib/librte_telemetry.so.24.1 00:03:24.271 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:24.271 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:24.271 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:24.271 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:24.271 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:24.271 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:24.271 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:24.271 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:24.528 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:24.528 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:24.528 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:24.528 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:24.785 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:24.785 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:24.785 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:24.785 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:24.785 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:24.785 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:24.785 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:24.785 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:25.043 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:25.043 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:25.043 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:25.043 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:25.043 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:25.300 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:25.300 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:25.300 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:25.300 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:25.300 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:25.300 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:25.300 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:25.300 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:25.557 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:25.557 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:25.557 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:25.557 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:25.814 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:25.814 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:25.814 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:25.814 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:25.814 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:25.814 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:25.814 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:26.071 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:26.071 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:26.071 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:26.071 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:26.071 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:26.328 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:26.328 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:26.328 [86/267] Linking static target lib/librte_ring.a 00:03:26.328 [87/267] Linking static target lib/librte_eal.a 00:03:26.328 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:26.328 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:26.328 [90/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:26.328 [91/267] Linking static target lib/librte_rcu.a 00:03:26.328 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:26.328 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:26.328 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:26.586 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:26.586 [96/267] Linking static target lib/librte_mempool.a 00:03:26.586 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.586 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:26.586 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:26.586 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:26.586 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.844 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:26.844 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:26.844 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:26.844 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:26.844 [106/267] Linking static target lib/librte_net.a 00:03:27.102 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:27.102 [108/267] Linking static target lib/librte_meter.a 00:03:27.102 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:27.102 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:27.102 [111/267] Linking static target lib/librte_mbuf.a 00:03:27.102 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:27.102 [113/267] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:27.359 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:27.359 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.359 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.359 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:27.617 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:27.617 [119/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.617 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:27.874 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:27.874 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:27.874 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:27.874 [124/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.874 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:27.874 [126/267] Linking static target lib/librte_pci.a 00:03:28.131 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:28.131 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:28.131 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:28.131 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:28.131 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:28.389 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:28.389 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:28.389 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.389 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:28.389 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:28.389 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:28.389 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:28.389 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:28.389 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:28.389 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:28.389 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:28.389 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:28.389 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:28.645 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:28.645 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:28.645 [147/267] Linking static target lib/librte_cmdline.a 00:03:28.902 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:28.902 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:28.902 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:28.902 [151/267] Linking static target lib/librte_timer.a 00:03:28.902 [152/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:28.902 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:29.159 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:29.159 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:29.159 [156/267] Linking static target lib/librte_compressdev.a 00:03:29.159 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:29.442 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:29.442 [159/267] Linking static target lib/librte_ethdev.a 00:03:29.442 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:29.442 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:29.442 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:29.442 [163/267] Linking static target lib/librte_hash.a 00:03:29.442 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:29.442 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.699 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:29.699 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:29.699 [168/267] Linking static target lib/librte_dmadev.a 00:03:29.699 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:29.956 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:29.956 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:29.956 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.956 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:29.956 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.213 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:30.213 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:30.213 [177/267] Linking static target lib/librte_cryptodev.a 00:03:30.213 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:30.213 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:30.213 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.213 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:30.471 [182/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:30.471 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:30.471 [184/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.471 [185/267] Linking static target lib/librte_power.a 00:03:30.728 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:30.728 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:30.728 [188/267] Linking static target lib/librte_reorder.a 00:03:30.728 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:30.728 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:30.728 [191/267] Linking static target lib/librte_security.a 00:03:30.728 [192/267] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:03:30.986 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.243 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:31.243 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.243 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.243 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:31.501 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:31.501 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:31.501 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:31.501 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:31.759 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:31.759 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:31.759 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:32.017 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:32.017 [206/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.017 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:32.017 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:32.017 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:32.017 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:32.274 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:32.274 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:32.274 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:32.274 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:32.274 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:32.274 [216/267] Linking static target drivers/librte_bus_vdev.a 00:03:32.274 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:32.274 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:32.274 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:32.274 [220/267] Linking static target drivers/librte_bus_pci.a 00:03:32.274 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:32.274 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:32.274 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:32.274 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:32.531 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.789 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.354 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:33.918 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.918 
[229/267] Linking target lib/librte_eal.so.24.1 00:03:33.918 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:33.918 [231/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:33.918 [232/267] Linking target lib/librte_timer.so.24.1 00:03:33.918 [233/267] Linking target lib/librte_ring.so.24.1 00:03:33.918 [234/267] Linking target lib/librte_pci.so.24.1 00:03:33.918 [235/267] Linking target lib/librte_dmadev.so.24.1 00:03:33.918 [236/267] Linking target lib/librte_meter.so.24.1 00:03:33.918 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:34.177 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:34.177 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:34.177 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:34.177 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:34.177 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:34.177 [243/267] Linking target lib/librte_mempool.so.24.1 00:03:34.177 [244/267] Linking target lib/librte_rcu.so.24.1 00:03:34.177 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:34.177 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:34.177 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:34.177 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:34.435 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:34.435 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:34.435 [251/267] Linking target lib/librte_net.so.24.1 00:03:34.435 [252/267] Linking target lib/librte_compressdev.so.24.1 00:03:34.435 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:34.435 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:34.435 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:34.692 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:34.692 [257/267] Linking target lib/librte_hash.so.24.1 00:03:34.692 [258/267] Linking target lib/librte_security.so.24.1 00:03:34.692 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:34.692 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.692 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:34.949 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:34.949 [263/267] Linking target lib/librte_power.so.24.1 00:03:36.319 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:36.319 [265/267] Linking static target lib/librte_vhost.a 00:03:37.738 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.738 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:37.738 INFO: autodetecting backend as ninja 00:03:37.738 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:52.685 CC lib/log/log.o 00:03:52.685 CC lib/log/log_flags.o 00:03:52.685 CC lib/log/log_deprecated.o 00:03:52.685 CC lib/ut_mock/mock.o 00:03:52.685 CC lib/ut/ut.o 00:03:52.685 LIB libspdk_ut.a 00:03:52.685 LIB libspdk_ut_mock.a 00:03:52.685 SO libspdk_ut.so.2.0 
00:03:52.685 LIB libspdk_log.a 00:03:52.685 SO libspdk_ut_mock.so.6.0 00:03:52.685 SYMLINK libspdk_ut.so 00:03:52.685 SO libspdk_log.so.7.1 00:03:52.685 SYMLINK libspdk_ut_mock.so 00:03:52.685 SYMLINK libspdk_log.so 00:03:52.685 CXX lib/trace_parser/trace.o 00:03:52.685 CC lib/ioat/ioat.o 00:03:52.685 CC lib/dma/dma.o 00:03:52.685 CC lib/util/base64.o 00:03:52.685 CC lib/util/cpuset.o 00:03:52.685 CC lib/util/bit_array.o 00:03:52.685 CC lib/util/crc16.o 00:03:52.685 CC lib/util/crc32.o 00:03:52.685 CC lib/util/crc32c.o 00:03:52.685 CC lib/vfio_user/host/vfio_user_pci.o 00:03:52.685 CC lib/vfio_user/host/vfio_user.o 00:03:52.685 CC lib/util/crc32_ieee.o 00:03:52.685 CC lib/util/crc64.o 00:03:52.685 CC lib/util/dif.o 00:03:52.685 CC lib/util/fd.o 00:03:52.685 LIB libspdk_dma.a 00:03:52.685 CC lib/util/fd_group.o 00:03:52.685 SO libspdk_dma.so.5.0 00:03:52.685 CC lib/util/file.o 00:03:52.685 CC lib/util/hexlify.o 00:03:52.685 SYMLINK libspdk_dma.so 00:03:52.685 LIB libspdk_ioat.a 00:03:52.685 CC lib/util/iov.o 00:03:52.685 SO libspdk_ioat.so.7.0 00:03:52.685 CC lib/util/math.o 00:03:52.685 CC lib/util/net.o 00:03:52.685 LIB libspdk_vfio_user.a 00:03:52.685 SYMLINK libspdk_ioat.so 00:03:52.685 CC lib/util/pipe.o 00:03:52.685 CC lib/util/strerror_tls.o 00:03:52.685 SO libspdk_vfio_user.so.5.0 00:03:52.685 CC lib/util/string.o 00:03:52.685 SYMLINK libspdk_vfio_user.so 00:03:52.685 CC lib/util/uuid.o 00:03:52.685 CC lib/util/xor.o 00:03:52.685 CC lib/util/zipf.o 00:03:52.685 CC lib/util/md5.o 00:03:52.944 LIB libspdk_util.a 00:03:52.944 SO libspdk_util.so.10.1 00:03:53.203 SYMLINK libspdk_util.so 00:03:53.203 LIB libspdk_trace_parser.a 00:03:53.203 SO libspdk_trace_parser.so.6.0 00:03:53.203 CC lib/json/json_parse.o 00:03:53.203 CC lib/json/json_util.o 00:03:53.203 CC lib/vmd/vmd.o 00:03:53.203 CC lib/conf/conf.o 00:03:53.203 CC lib/env_dpdk/env.o 00:03:53.203 CC lib/idxd/idxd.o 00:03:53.203 CC lib/rdma_utils/rdma_utils.o 00:03:53.203 CC lib/idxd/idxd_user.o 00:03:53.203 CC lib/vmd/led.o 00:03:53.463 SYMLINK libspdk_trace_parser.so 00:03:53.463 CC lib/idxd/idxd_kernel.o 00:03:53.463 CC lib/env_dpdk/memory.o 00:03:53.463 LIB libspdk_conf.a 00:03:53.463 CC lib/json/json_write.o 00:03:53.463 CC lib/env_dpdk/pci.o 00:03:53.463 SO libspdk_conf.so.6.0 00:03:53.463 CC lib/env_dpdk/init.o 00:03:53.463 CC lib/env_dpdk/threads.o 00:03:53.463 LIB libspdk_rdma_utils.a 00:03:53.463 SO libspdk_rdma_utils.so.1.0 00:03:53.463 SYMLINK libspdk_conf.so 00:03:53.463 CC lib/env_dpdk/pci_ioat.o 00:03:53.463 SYMLINK libspdk_rdma_utils.so 00:03:53.463 CC lib/env_dpdk/pci_virtio.o 00:03:53.725 CC lib/env_dpdk/pci_vmd.o 00:03:53.725 CC lib/env_dpdk/pci_idxd.o 00:03:53.725 CC lib/env_dpdk/pci_event.o 00:03:53.725 LIB libspdk_json.a 00:03:53.725 CC lib/env_dpdk/sigbus_handler.o 00:03:53.725 CC lib/env_dpdk/pci_dpdk.o 00:03:53.725 SO libspdk_json.so.6.0 00:03:53.725 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:53.725 SYMLINK libspdk_json.so 00:03:53.725 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:53.984 LIB libspdk_idxd.a 00:03:53.985 SO libspdk_idxd.so.12.1 00:03:53.985 LIB libspdk_vmd.a 00:03:53.985 SO libspdk_vmd.so.6.0 00:03:53.985 SYMLINK libspdk_idxd.so 00:03:53.985 CC lib/rdma_provider/common.o 00:03:53.985 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:53.985 SYMLINK libspdk_vmd.so 00:03:53.985 CC lib/jsonrpc/jsonrpc_server.o 00:03:53.985 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:53.985 CC lib/jsonrpc/jsonrpc_client.o 00:03:53.985 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.246 LIB libspdk_rdma_provider.a 00:03:54.246 SO 
libspdk_rdma_provider.so.7.0 00:03:54.246 SYMLINK libspdk_rdma_provider.so 00:03:54.246 LIB libspdk_jsonrpc.a 00:03:54.246 SO libspdk_jsonrpc.so.6.0 00:03:54.538 SYMLINK libspdk_jsonrpc.so 00:03:54.538 CC lib/rpc/rpc.o 00:03:54.799 LIB libspdk_env_dpdk.a 00:03:54.799 SO libspdk_env_dpdk.so.15.1 00:03:54.799 LIB libspdk_rpc.a 00:03:54.799 SO libspdk_rpc.so.6.0 00:03:54.799 SYMLINK libspdk_env_dpdk.so 00:03:54.799 SYMLINK libspdk_rpc.so 00:03:55.059 CC lib/keyring/keyring.o 00:03:55.059 CC lib/keyring/keyring_rpc.o 00:03:55.059 CC lib/trace/trace.o 00:03:55.059 CC lib/trace/trace_rpc.o 00:03:55.059 CC lib/trace/trace_flags.o 00:03:55.059 CC lib/notify/notify.o 00:03:55.059 CC lib/notify/notify_rpc.o 00:03:55.320 LIB libspdk_notify.a 00:03:55.320 SO libspdk_notify.so.6.0 00:03:55.320 LIB libspdk_keyring.a 00:03:55.320 SYMLINK libspdk_notify.so 00:03:55.320 LIB libspdk_trace.a 00:03:55.320 SO libspdk_keyring.so.2.0 00:03:55.320 SO libspdk_trace.so.11.0 00:03:55.320 SYMLINK libspdk_keyring.so 00:03:55.320 SYMLINK libspdk_trace.so 00:03:55.578 CC lib/thread/thread.o 00:03:55.578 CC lib/thread/iobuf.o 00:03:55.578 CC lib/sock/sock_rpc.o 00:03:55.578 CC lib/sock/sock.o 00:03:56.163 LIB libspdk_sock.a 00:03:56.164 SO libspdk_sock.so.10.0 00:03:56.164 SYMLINK libspdk_sock.so 00:03:56.423 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:56.423 CC lib/nvme/nvme_ctrlr.o 00:03:56.423 CC lib/nvme/nvme_fabric.o 00:03:56.423 CC lib/nvme/nvme_ns_cmd.o 00:03:56.423 CC lib/nvme/nvme_ns.o 00:03:56.423 CC lib/nvme/nvme_pcie_common.o 00:03:56.423 CC lib/nvme/nvme_pcie.o 00:03:56.423 CC lib/nvme/nvme.o 00:03:56.423 CC lib/nvme/nvme_qpair.o 00:03:56.994 CC lib/nvme/nvme_quirks.o 00:03:56.994 CC lib/nvme/nvme_transport.o 00:03:56.994 CC lib/nvme/nvme_discovery.o 00:03:56.994 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:57.256 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:57.256 LIB libspdk_thread.a 00:03:57.256 CC lib/nvme/nvme_tcp.o 00:03:57.256 SO libspdk_thread.so.11.0 00:03:57.256 CC lib/nvme/nvme_opal.o 00:03:57.256 SYMLINK libspdk_thread.so 00:03:57.256 CC lib/nvme/nvme_io_msg.o 00:03:57.517 CC lib/nvme/nvme_poll_group.o 00:03:57.517 CC lib/nvme/nvme_zns.o 00:03:57.517 CC lib/nvme/nvme_stubs.o 00:03:57.517 CC lib/nvme/nvme_auth.o 00:03:57.776 CC lib/nvme/nvme_cuse.o 00:03:57.776 CC lib/nvme/nvme_vfio_user.o 00:03:57.776 CC lib/nvme/nvme_rdma.o 00:03:58.036 CC lib/accel/accel.o 00:03:58.036 CC lib/accel/accel_rpc.o 00:03:58.036 CC lib/accel/accel_sw.o 00:03:58.296 CC lib/blob/blobstore.o 00:03:58.296 CC lib/init/json_config.o 00:03:58.296 CC lib/blob/request.o 00:03:58.296 CC lib/blob/zeroes.o 00:03:58.555 CC lib/init/subsystem.o 00:03:58.555 CC lib/virtio/virtio.o 00:03:58.555 CC lib/blob/blob_bs_dev.o 00:03:58.555 CC lib/init/subsystem_rpc.o 00:03:58.555 CC lib/vfu_tgt/tgt_endpoint.o 00:03:58.813 CC lib/init/rpc.o 00:03:58.813 CC lib/fsdev/fsdev.o 00:03:58.813 CC lib/fsdev/fsdev_io.o 00:03:58.813 CC lib/fsdev/fsdev_rpc.o 00:03:58.813 LIB libspdk_init.a 00:03:58.813 CC lib/vfu_tgt/tgt_rpc.o 00:03:58.813 SO libspdk_init.so.6.0 00:03:58.813 CC lib/virtio/virtio_vhost_user.o 00:03:58.813 CC lib/virtio/virtio_vfio_user.o 00:03:59.069 SYMLINK libspdk_init.so 00:03:59.069 CC lib/virtio/virtio_pci.o 00:03:59.069 LIB libspdk_vfu_tgt.a 00:03:59.069 SO libspdk_vfu_tgt.so.3.0 00:03:59.069 LIB libspdk_accel.a 00:03:59.069 SYMLINK libspdk_vfu_tgt.so 00:03:59.069 SO libspdk_accel.so.16.0 00:03:59.069 CC lib/event/app.o 00:03:59.069 CC lib/event/reactor.o 00:03:59.069 CC lib/event/log_rpc.o 00:03:59.069 CC lib/event/app_rpc.o 00:03:59.069 LIB 
libspdk_nvme.a 00:03:59.069 SYMLINK libspdk_accel.so 00:03:59.069 CC lib/event/scheduler_static.o 00:03:59.326 LIB libspdk_virtio.a 00:03:59.326 SO libspdk_virtio.so.7.0 00:03:59.326 SO libspdk_nvme.so.15.0 00:03:59.326 LIB libspdk_fsdev.a 00:03:59.326 SYMLINK libspdk_virtio.so 00:03:59.326 SO libspdk_fsdev.so.2.0 00:03:59.326 CC lib/bdev/bdev.o 00:03:59.326 CC lib/bdev/bdev_rpc.o 00:03:59.326 CC lib/bdev/bdev_zone.o 00:03:59.326 CC lib/bdev/part.o 00:03:59.326 CC lib/bdev/scsi_nvme.o 00:03:59.326 SYMLINK libspdk_fsdev.so 00:03:59.639 SYMLINK libspdk_nvme.so 00:03:59.639 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:59.639 LIB libspdk_event.a 00:03:59.639 SO libspdk_event.so.14.0 00:03:59.639 SYMLINK libspdk_event.so 00:04:00.205 LIB libspdk_fuse_dispatcher.a 00:04:00.205 SO libspdk_fuse_dispatcher.so.1.0 00:04:00.462 SYMLINK libspdk_fuse_dispatcher.so 00:04:01.843 LIB libspdk_blob.a 00:04:01.843 SO libspdk_blob.so.11.0 00:04:01.843 SYMLINK libspdk_blob.so 00:04:01.843 CC lib/lvol/lvol.o 00:04:01.843 CC lib/blobfs/blobfs.o 00:04:01.843 CC lib/blobfs/tree.o 00:04:02.100 LIB libspdk_bdev.a 00:04:02.360 SO libspdk_bdev.so.17.0 00:04:02.360 SYMLINK libspdk_bdev.so 00:04:02.620 CC lib/nbd/nbd.o 00:04:02.620 CC lib/scsi/dev.o 00:04:02.620 CC lib/nbd/nbd_rpc.o 00:04:02.620 CC lib/scsi/lun.o 00:04:02.620 CC lib/ftl/ftl_init.o 00:04:02.620 CC lib/ftl/ftl_core.o 00:04:02.620 CC lib/ublk/ublk.o 00:04:02.620 CC lib/nvmf/ctrlr.o 00:04:02.620 CC lib/nvmf/ctrlr_discovery.o 00:04:02.881 CC lib/nvmf/ctrlr_bdev.o 00:04:02.881 CC lib/nvmf/subsystem.o 00:04:02.881 LIB libspdk_blobfs.a 00:04:02.881 SO libspdk_blobfs.so.10.0 00:04:02.881 CC lib/scsi/port.o 00:04:02.881 LIB libspdk_lvol.a 00:04:02.881 SO libspdk_lvol.so.10.0 00:04:02.881 SYMLINK libspdk_blobfs.so 00:04:02.881 CC lib/nvmf/nvmf.o 00:04:02.881 CC lib/ftl/ftl_layout.o 00:04:02.881 SYMLINK libspdk_lvol.so 00:04:02.881 CC lib/ftl/ftl_debug.o 00:04:02.881 LIB libspdk_nbd.a 00:04:02.881 SO libspdk_nbd.so.7.0 00:04:02.881 CC lib/scsi/scsi.o 00:04:03.145 SYMLINK libspdk_nbd.so 00:04:03.145 CC lib/scsi/scsi_bdev.o 00:04:03.145 CC lib/ftl/ftl_io.o 00:04:03.145 CC lib/nvmf/nvmf_rpc.o 00:04:03.145 CC lib/ublk/ublk_rpc.o 00:04:03.145 CC lib/ftl/ftl_sb.o 00:04:03.145 CC lib/ftl/ftl_l2p.o 00:04:03.405 CC lib/ftl/ftl_l2p_flat.o 00:04:03.405 LIB libspdk_ublk.a 00:04:03.405 CC lib/ftl/ftl_nv_cache.o 00:04:03.405 SO libspdk_ublk.so.3.0 00:04:03.405 CC lib/scsi/scsi_pr.o 00:04:03.405 SYMLINK libspdk_ublk.so 00:04:03.405 CC lib/nvmf/transport.o 00:04:03.405 CC lib/nvmf/tcp.o 00:04:03.665 CC lib/ftl/ftl_band.o 00:04:03.665 CC lib/ftl/ftl_band_ops.o 00:04:03.665 CC lib/scsi/scsi_rpc.o 00:04:03.665 CC lib/nvmf/stubs.o 00:04:03.926 CC lib/scsi/task.o 00:04:03.926 CC lib/nvmf/mdns_server.o 00:04:03.926 CC lib/ftl/ftl_writer.o 00:04:03.926 CC lib/nvmf/vfio_user.o 00:04:03.926 CC lib/nvmf/rdma.o 00:04:04.187 CC lib/nvmf/auth.o 00:04:04.187 LIB libspdk_scsi.a 00:04:04.187 CC lib/ftl/ftl_rq.o 00:04:04.187 CC lib/ftl/ftl_reloc.o 00:04:04.187 SO libspdk_scsi.so.9.0 00:04:04.474 CC lib/ftl/ftl_l2p_cache.o 00:04:04.474 SYMLINK libspdk_scsi.so 00:04:04.474 CC lib/ftl/ftl_p2l.o 00:04:04.474 CC lib/ftl/ftl_p2l_log.o 00:04:04.474 CC lib/iscsi/conn.o 00:04:04.474 CC lib/iscsi/init_grp.o 00:04:04.474 CC lib/iscsi/iscsi.o 00:04:04.734 CC lib/iscsi/param.o 00:04:04.734 CC lib/iscsi/portal_grp.o 00:04:04.734 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.994 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:04.994 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:04.994 CC lib/iscsi/tgt_node.o 00:04:04.994 CC 
lib/iscsi/iscsi_subsystem.o 00:04:04.994 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:04.994 CC lib/iscsi/iscsi_rpc.o 00:04:04.994 CC lib/vhost/vhost.o 00:04:05.255 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:05.255 CC lib/vhost/vhost_rpc.o 00:04:05.255 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.516 CC lib/vhost/vhost_scsi.o 00:04:05.516 CC lib/vhost/vhost_blk.o 00:04:05.516 CC lib/iscsi/task.o 00:04:05.516 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.516 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:05.779 CC lib/vhost/rte_vhost_user.o 00:04:05.779 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:05.779 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:05.779 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:05.779 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:05.779 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:05.779 CC lib/ftl/utils/ftl_conf.o 00:04:05.779 CC lib/ftl/utils/ftl_md.o 00:04:05.779 CC lib/ftl/utils/ftl_mempool.o 00:04:06.041 CC lib/ftl/utils/ftl_bitmap.o 00:04:06.041 LIB libspdk_iscsi.a 00:04:06.041 CC lib/ftl/utils/ftl_property.o 00:04:06.041 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:06.041 SO libspdk_iscsi.so.8.0 00:04:06.041 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:06.041 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:06.299 LIB libspdk_nvmf.a 00:04:06.299 SYMLINK libspdk_iscsi.so 00:04:06.299 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:06.299 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:06.299 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:06.299 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:06.299 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:06.299 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:06.299 SO libspdk_nvmf.so.20.0 00:04:06.299 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:06.299 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:06.299 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:06.559 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:06.559 CC lib/ftl/base/ftl_base_dev.o 00:04:06.559 CC lib/ftl/base/ftl_base_bdev.o 00:04:06.559 CC lib/ftl/ftl_trace.o 00:04:06.559 SYMLINK libspdk_nvmf.so 00:04:06.559 LIB libspdk_vhost.a 00:04:06.818 SO libspdk_vhost.so.8.0 00:04:06.818 LIB libspdk_ftl.a 00:04:06.818 SYMLINK libspdk_vhost.so 00:04:06.818 SO libspdk_ftl.so.9.0 00:04:07.079 SYMLINK libspdk_ftl.so 00:04:07.339 CC module/env_dpdk/env_dpdk_rpc.o 00:04:07.339 CC module/vfu_device/vfu_virtio.o 00:04:07.600 CC module/keyring/file/keyring.o 00:04:07.600 CC module/fsdev/aio/fsdev_aio.o 00:04:07.600 CC module/accel/error/accel_error.o 00:04:07.600 CC module/blob/bdev/blob_bdev.o 00:04:07.600 CC module/accel/ioat/accel_ioat.o 00:04:07.600 CC module/sock/posix/posix.o 00:04:07.600 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:07.600 CC module/keyring/linux/keyring.o 00:04:07.600 LIB libspdk_env_dpdk_rpc.a 00:04:07.600 SO libspdk_env_dpdk_rpc.so.6.0 00:04:07.600 CC module/keyring/file/keyring_rpc.o 00:04:07.600 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.600 CC module/keyring/linux/keyring_rpc.o 00:04:07.600 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:07.600 CC module/accel/error/accel_error_rpc.o 00:04:07.600 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.600 LIB libspdk_scheduler_dynamic.a 00:04:07.862 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.862 LIB libspdk_keyring_file.a 00:04:07.862 LIB libspdk_blob_bdev.a 00:04:07.862 SO libspdk_keyring_file.so.2.0 00:04:07.862 SO libspdk_blob_bdev.so.11.0 00:04:07.862 LIB libspdk_accel_error.a 00:04:07.862 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.862 LIB libspdk_keyring_linux.a 00:04:07.862 LIB libspdk_accel_ioat.a 00:04:07.862 SO libspdk_accel_error.so.2.0 00:04:07.862 CC module/fsdev/aio/linux_aio_mgr.o 00:04:07.862 SYMLINK 
libspdk_keyring_file.so 00:04:07.862 SO libspdk_keyring_linux.so.1.0 00:04:07.862 SO libspdk_accel_ioat.so.6.0 00:04:07.862 SYMLINK libspdk_blob_bdev.so 00:04:07.862 SYMLINK libspdk_accel_error.so 00:04:07.862 SYMLINK libspdk_keyring_linux.so 00:04:07.862 SYMLINK libspdk_accel_ioat.so 00:04:08.124 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:08.124 CC module/scheduler/gscheduler/gscheduler.o 00:04:08.124 CC module/vfu_device/vfu_virtio_blk.o 00:04:08.124 CC module/accel/dsa/accel_dsa.o 00:04:08.124 CC module/accel/iaa/accel_iaa.o 00:04:08.124 CC module/vfu_device/vfu_virtio_scsi.o 00:04:08.124 CC module/blobfs/bdev/blobfs_bdev.o 00:04:08.124 CC module/bdev/delay/vbdev_delay.o 00:04:08.124 LIB libspdk_scheduler_gscheduler.a 00:04:08.124 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.124 SO libspdk_scheduler_gscheduler.so.4.0 00:04:08.124 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:08.124 LIB libspdk_fsdev_aio.a 00:04:08.124 SO libspdk_fsdev_aio.so.1.0 00:04:08.124 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.124 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.124 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:08.124 CC module/accel/iaa/accel_iaa_rpc.o 00:04:08.386 LIB libspdk_sock_posix.a 00:04:08.386 CC module/vfu_device/vfu_virtio_rpc.o 00:04:08.386 SYMLINK libspdk_fsdev_aio.so 00:04:08.386 CC module/vfu_device/vfu_virtio_fs.o 00:04:08.386 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.386 CC module/accel/dsa/accel_dsa_rpc.o 00:04:08.386 SO libspdk_sock_posix.so.6.0 00:04:08.386 LIB libspdk_accel_iaa.a 00:04:08.386 SO libspdk_accel_iaa.so.3.0 00:04:08.386 SYMLINK libspdk_sock_posix.so 00:04:08.386 LIB libspdk_accel_dsa.a 00:04:08.386 SYMLINK libspdk_accel_iaa.so 00:04:08.386 SO libspdk_accel_dsa.so.5.0 00:04:08.386 LIB libspdk_blobfs_bdev.a 00:04:08.386 SO libspdk_blobfs_bdev.so.6.0 00:04:08.386 LIB libspdk_bdev_delay.a 00:04:08.386 LIB libspdk_vfu_device.a 00:04:08.648 CC module/bdev/error/vbdev_error.o 00:04:08.648 SYMLINK libspdk_accel_dsa.so 00:04:08.648 SO libspdk_bdev_delay.so.6.0 00:04:08.648 CC module/bdev/lvol/vbdev_lvol.o 00:04:08.648 CC module/bdev/gpt/gpt.o 00:04:08.648 SO libspdk_vfu_device.so.3.0 00:04:08.648 SYMLINK libspdk_blobfs_bdev.so 00:04:08.648 CC module/bdev/malloc/bdev_malloc.o 00:04:08.648 SYMLINK libspdk_bdev_delay.so 00:04:08.648 CC module/bdev/null/bdev_null.o 00:04:08.648 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.648 CC module/bdev/nvme/bdev_nvme.o 00:04:08.648 SYMLINK libspdk_vfu_device.so 00:04:08.648 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.648 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.648 CC module/bdev/raid/bdev_raid.o 00:04:08.648 CC module/bdev/gpt/vbdev_gpt.o 00:04:08.908 CC module/bdev/error/vbdev_error_rpc.o 00:04:08.908 CC module/bdev/null/bdev_null_rpc.o 00:04:08.908 LIB libspdk_bdev_error.a 00:04:08.908 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.908 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.908 SO libspdk_bdev_error.so.6.0 00:04:08.908 LIB libspdk_bdev_gpt.a 00:04:08.908 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.908 SO libspdk_bdev_gpt.so.6.0 00:04:08.908 LIB libspdk_bdev_null.a 00:04:08.908 LIB libspdk_bdev_lvol.a 00:04:09.169 SYMLINK libspdk_bdev_error.so 00:04:09.169 SO libspdk_bdev_null.so.6.0 00:04:09.169 SO libspdk_bdev_lvol.so.6.0 00:04:09.169 SYMLINK libspdk_bdev_gpt.so 00:04:09.169 CC module/bdev/nvme/nvme_rpc.o 00:04:09.169 LIB libspdk_bdev_malloc.a 00:04:09.169 SYMLINK libspdk_bdev_null.so 00:04:09.169 SYMLINK libspdk_bdev_lvol.so 00:04:09.169 CC 
module/bdev/nvme/bdev_mdns_client.o 00:04:09.169 SO libspdk_bdev_malloc.so.6.0 00:04:09.169 LIB libspdk_bdev_passthru.a 00:04:09.169 CC module/bdev/split/vbdev_split.o 00:04:09.169 SO libspdk_bdev_passthru.so.6.0 00:04:09.169 SYMLINK libspdk_bdev_malloc.so 00:04:09.169 CC module/bdev/split/vbdev_split_rpc.o 00:04:09.169 SYMLINK libspdk_bdev_passthru.so 00:04:09.169 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:09.430 CC module/bdev/nvme/vbdev_opal.o 00:04:09.430 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:09.430 CC module/bdev/aio/bdev_aio.o 00:04:09.430 CC module/bdev/raid/bdev_raid_sb.o 00:04:09.430 LIB libspdk_bdev_split.a 00:04:09.430 CC module/bdev/ftl/bdev_ftl.o 00:04:09.430 CC module/bdev/iscsi/bdev_iscsi.o 00:04:09.430 SO libspdk_bdev_split.so.6.0 00:04:09.430 SYMLINK libspdk_bdev_split.so 00:04:09.430 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.430 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:09.689 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:09.689 LIB libspdk_bdev_zone_block.a 00:04:09.689 SO libspdk_bdev_zone_block.so.6.0 00:04:09.689 CC module/bdev/raid/raid0.o 00:04:09.689 CC module/bdev/raid/raid1.o 00:04:09.689 SYMLINK libspdk_bdev_zone_block.so 00:04:09.689 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:09.689 CC module/bdev/aio/bdev_aio_rpc.o 00:04:09.689 LIB libspdk_bdev_ftl.a 00:04:09.689 SO libspdk_bdev_ftl.so.6.0 00:04:09.689 CC module/bdev/raid/concat.o 00:04:09.948 SYMLINK libspdk_bdev_ftl.so 00:04:09.948 LIB libspdk_bdev_iscsi.a 00:04:09.948 SO libspdk_bdev_iscsi.so.6.0 00:04:09.948 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:09.948 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:09.948 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:09.948 LIB libspdk_bdev_aio.a 00:04:09.948 SYMLINK libspdk_bdev_iscsi.so 00:04:09.948 SO libspdk_bdev_aio.so.6.0 00:04:09.948 SYMLINK libspdk_bdev_aio.so 00:04:09.948 LIB libspdk_bdev_raid.a 00:04:10.211 SO libspdk_bdev_raid.so.6.0 00:04:10.211 SYMLINK libspdk_bdev_raid.so 00:04:10.473 LIB libspdk_bdev_virtio.a 00:04:10.473 SO libspdk_bdev_virtio.so.6.0 00:04:10.473 SYMLINK libspdk_bdev_virtio.so 00:04:11.415 LIB libspdk_bdev_nvme.a 00:04:11.415 SO libspdk_bdev_nvme.so.7.1 00:04:11.676 SYMLINK libspdk_bdev_nvme.so 00:04:11.938 CC module/event/subsystems/iobuf/iobuf.o 00:04:11.938 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:11.938 CC module/event/subsystems/sock/sock.o 00:04:11.938 CC module/event/subsystems/vmd/vmd.o 00:04:11.938 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:11.938 CC module/event/subsystems/scheduler/scheduler.o 00:04:11.938 CC module/event/subsystems/keyring/keyring.o 00:04:11.938 CC module/event/subsystems/fsdev/fsdev.o 00:04:11.938 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:11.938 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:12.197 LIB libspdk_event_keyring.a 00:04:12.197 LIB libspdk_event_sock.a 00:04:12.197 LIB libspdk_event_scheduler.a 00:04:12.197 LIB libspdk_event_vmd.a 00:04:12.197 LIB libspdk_event_fsdev.a 00:04:12.197 LIB libspdk_event_vfu_tgt.a 00:04:12.197 SO libspdk_event_sock.so.5.0 00:04:12.197 SO libspdk_event_scheduler.so.4.0 00:04:12.197 SO libspdk_event_keyring.so.1.0 00:04:12.197 LIB libspdk_event_vhost_blk.a 00:04:12.197 SO libspdk_event_vmd.so.6.0 00:04:12.197 SO libspdk_event_fsdev.so.1.0 00:04:12.197 SO libspdk_event_vfu_tgt.so.3.0 00:04:12.197 SO libspdk_event_vhost_blk.so.3.0 00:04:12.197 LIB libspdk_event_iobuf.a 00:04:12.197 SYMLINK libspdk_event_scheduler.so 00:04:12.197 SYMLINK libspdk_event_sock.so 00:04:12.197 SYMLINK libspdk_event_keyring.so 
00:04:12.197 SYMLINK libspdk_event_fsdev.so 00:04:12.197 SO libspdk_event_iobuf.so.3.0 00:04:12.197 SYMLINK libspdk_event_vmd.so 00:04:12.197 SYMLINK libspdk_event_vfu_tgt.so 00:04:12.197 SYMLINK libspdk_event_vhost_blk.so 00:04:12.197 SYMLINK libspdk_event_iobuf.so 00:04:12.456 CC module/event/subsystems/accel/accel.o 00:04:12.456 LIB libspdk_event_accel.a 00:04:12.717 SO libspdk_event_accel.so.6.0 00:04:12.717 SYMLINK libspdk_event_accel.so 00:04:12.979 CC module/event/subsystems/bdev/bdev.o 00:04:12.979 LIB libspdk_event_bdev.a 00:04:12.979 SO libspdk_event_bdev.so.6.0 00:04:13.238 SYMLINK libspdk_event_bdev.so 00:04:13.238 CC module/event/subsystems/nbd/nbd.o 00:04:13.238 CC module/event/subsystems/scsi/scsi.o 00:04:13.238 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:13.238 CC module/event/subsystems/ublk/ublk.o 00:04:13.238 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:13.498 LIB libspdk_event_nbd.a 00:04:13.498 LIB libspdk_event_ublk.a 00:04:13.498 SO libspdk_event_nbd.so.6.0 00:04:13.498 LIB libspdk_event_scsi.a 00:04:13.498 SO libspdk_event_ublk.so.3.0 00:04:13.498 SO libspdk_event_scsi.so.6.0 00:04:13.498 SYMLINK libspdk_event_nbd.so 00:04:13.498 SYMLINK libspdk_event_ublk.so 00:04:13.498 SYMLINK libspdk_event_scsi.so 00:04:13.498 LIB libspdk_event_nvmf.a 00:04:13.498 SO libspdk_event_nvmf.so.6.0 00:04:13.498 SYMLINK libspdk_event_nvmf.so 00:04:13.759 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.759 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.759 LIB libspdk_event_vhost_scsi.a 00:04:13.759 LIB libspdk_event_iscsi.a 00:04:13.759 SO libspdk_event_vhost_scsi.so.3.0 00:04:13.759 SO libspdk_event_iscsi.so.6.0 00:04:14.019 SYMLINK libspdk_event_vhost_scsi.so 00:04:14.019 SYMLINK libspdk_event_iscsi.so 00:04:14.019 SO libspdk.so.6.0 00:04:14.019 SYMLINK libspdk.so 00:04:14.278 CXX app/trace/trace.o 00:04:14.278 CC app/trace_record/trace_record.o 00:04:14.278 TEST_HEADER include/spdk/accel.h 00:04:14.278 TEST_HEADER include/spdk/accel_module.h 00:04:14.278 TEST_HEADER include/spdk/assert.h 00:04:14.278 TEST_HEADER include/spdk/barrier.h 00:04:14.278 TEST_HEADER include/spdk/base64.h 00:04:14.278 TEST_HEADER include/spdk/bdev.h 00:04:14.278 TEST_HEADER include/spdk/bdev_module.h 00:04:14.279 TEST_HEADER include/spdk/bdev_zone.h 00:04:14.279 TEST_HEADER include/spdk/bit_array.h 00:04:14.279 TEST_HEADER include/spdk/bit_pool.h 00:04:14.279 TEST_HEADER include/spdk/blob_bdev.h 00:04:14.279 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:14.279 TEST_HEADER include/spdk/blobfs.h 00:04:14.279 TEST_HEADER include/spdk/blob.h 00:04:14.279 TEST_HEADER include/spdk/conf.h 00:04:14.279 TEST_HEADER include/spdk/config.h 00:04:14.279 TEST_HEADER include/spdk/cpuset.h 00:04:14.279 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:14.279 TEST_HEADER include/spdk/crc16.h 00:04:14.279 TEST_HEADER include/spdk/crc32.h 00:04:14.279 TEST_HEADER include/spdk/crc64.h 00:04:14.279 TEST_HEADER include/spdk/dif.h 00:04:14.279 TEST_HEADER include/spdk/dma.h 00:04:14.279 TEST_HEADER include/spdk/endian.h 00:04:14.279 TEST_HEADER include/spdk/env_dpdk.h 00:04:14.279 TEST_HEADER include/spdk/env.h 00:04:14.279 TEST_HEADER include/spdk/event.h 00:04:14.279 TEST_HEADER include/spdk/fd_group.h 00:04:14.279 CC examples/util/zipf/zipf.o 00:04:14.279 TEST_HEADER include/spdk/fd.h 00:04:14.279 TEST_HEADER include/spdk/file.h 00:04:14.279 TEST_HEADER include/spdk/fsdev.h 00:04:14.279 TEST_HEADER include/spdk/fsdev_module.h 00:04:14.279 TEST_HEADER include/spdk/ftl.h 00:04:14.279 CC 
examples/ioat/perf/perf.o 00:04:14.279 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:14.279 TEST_HEADER include/spdk/gpt_spec.h 00:04:14.279 TEST_HEADER include/spdk/hexlify.h 00:04:14.279 TEST_HEADER include/spdk/histogram_data.h 00:04:14.279 CC test/thread/poller_perf/poller_perf.o 00:04:14.279 TEST_HEADER include/spdk/idxd.h 00:04:14.279 TEST_HEADER include/spdk/idxd_spec.h 00:04:14.279 TEST_HEADER include/spdk/init.h 00:04:14.279 TEST_HEADER include/spdk/ioat.h 00:04:14.279 TEST_HEADER include/spdk/ioat_spec.h 00:04:14.279 TEST_HEADER include/spdk/iscsi_spec.h 00:04:14.279 TEST_HEADER include/spdk/json.h 00:04:14.279 TEST_HEADER include/spdk/jsonrpc.h 00:04:14.279 TEST_HEADER include/spdk/keyring.h 00:04:14.279 TEST_HEADER include/spdk/keyring_module.h 00:04:14.279 CC test/dma/test_dma/test_dma.o 00:04:14.279 TEST_HEADER include/spdk/likely.h 00:04:14.279 TEST_HEADER include/spdk/log.h 00:04:14.279 CC test/app/bdev_svc/bdev_svc.o 00:04:14.279 TEST_HEADER include/spdk/lvol.h 00:04:14.279 TEST_HEADER include/spdk/md5.h 00:04:14.279 TEST_HEADER include/spdk/memory.h 00:04:14.279 TEST_HEADER include/spdk/mmio.h 00:04:14.538 TEST_HEADER include/spdk/nbd.h 00:04:14.538 TEST_HEADER include/spdk/net.h 00:04:14.538 TEST_HEADER include/spdk/notify.h 00:04:14.538 TEST_HEADER include/spdk/nvme.h 00:04:14.538 TEST_HEADER include/spdk/nvme_intel.h 00:04:14.538 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:14.538 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:14.538 TEST_HEADER include/spdk/nvme_spec.h 00:04:14.538 TEST_HEADER include/spdk/nvme_zns.h 00:04:14.538 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:14.538 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:14.538 TEST_HEADER include/spdk/nvmf.h 00:04:14.538 TEST_HEADER include/spdk/nvmf_spec.h 00:04:14.538 TEST_HEADER include/spdk/nvmf_transport.h 00:04:14.538 TEST_HEADER include/spdk/opal.h 00:04:14.538 TEST_HEADER include/spdk/opal_spec.h 00:04:14.538 TEST_HEADER include/spdk/pci_ids.h 00:04:14.538 TEST_HEADER include/spdk/pipe.h 00:04:14.538 TEST_HEADER include/spdk/queue.h 00:04:14.538 TEST_HEADER include/spdk/reduce.h 00:04:14.538 TEST_HEADER include/spdk/rpc.h 00:04:14.538 TEST_HEADER include/spdk/scheduler.h 00:04:14.538 TEST_HEADER include/spdk/scsi.h 00:04:14.538 TEST_HEADER include/spdk/scsi_spec.h 00:04:14.539 TEST_HEADER include/spdk/sock.h 00:04:14.539 TEST_HEADER include/spdk/stdinc.h 00:04:14.539 TEST_HEADER include/spdk/string.h 00:04:14.539 TEST_HEADER include/spdk/thread.h 00:04:14.539 TEST_HEADER include/spdk/trace.h 00:04:14.539 TEST_HEADER include/spdk/trace_parser.h 00:04:14.539 TEST_HEADER include/spdk/tree.h 00:04:14.539 TEST_HEADER include/spdk/ublk.h 00:04:14.539 TEST_HEADER include/spdk/util.h 00:04:14.539 TEST_HEADER include/spdk/uuid.h 00:04:14.539 TEST_HEADER include/spdk/version.h 00:04:14.539 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:14.539 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:14.539 TEST_HEADER include/spdk/vhost.h 00:04:14.539 TEST_HEADER include/spdk/vmd.h 00:04:14.539 TEST_HEADER include/spdk/xor.h 00:04:14.539 TEST_HEADER include/spdk/zipf.h 00:04:14.539 CXX test/cpp_headers/accel.o 00:04:14.539 LINK interrupt_tgt 00:04:14.539 LINK zipf 00:04:14.539 LINK spdk_trace_record 00:04:14.539 LINK poller_perf 00:04:14.539 LINK ioat_perf 00:04:14.539 LINK bdev_svc 00:04:14.539 LINK spdk_trace 00:04:14.539 CXX test/cpp_headers/accel_module.o 00:04:14.539 CXX test/cpp_headers/assert.o 00:04:14.799 CC app/nvmf_tgt/nvmf_main.o 00:04:14.799 CC examples/ioat/verify/verify.o 00:04:14.799 CXX 
test/cpp_headers/barrier.o 00:04:14.799 CC test/env/vtophys/vtophys.o 00:04:14.799 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.799 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:14.799 CC test/env/mem_callbacks/mem_callbacks.o 00:04:14.799 CC test/app/histogram_perf/histogram_perf.o 00:04:15.060 CXX test/cpp_headers/base64.o 00:04:15.060 LINK nvmf_tgt 00:04:15.060 LINK test_dma 00:04:15.060 LINK vtophys 00:04:15.060 LINK verify 00:04:15.060 LINK histogram_perf 00:04:15.060 CXX test/cpp_headers/bdev.o 00:04:15.060 CXX test/cpp_headers/bdev_module.o 00:04:15.321 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:15.321 CC test/rpc_client/rpc_client_test.o 00:04:15.321 CC test/app/jsoncat/jsoncat.o 00:04:15.321 CC app/iscsi_tgt/iscsi_tgt.o 00:04:15.321 CXX test/cpp_headers/bdev_zone.o 00:04:15.321 LINK nvme_fuzz 00:04:15.322 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:15.581 LINK jsoncat 00:04:15.581 LINK mem_callbacks 00:04:15.581 CXX test/cpp_headers/bit_array.o 00:04:15.581 LINK rpc_client_test 00:04:15.581 CC test/app/stub/stub.o 00:04:15.581 LINK iscsi_tgt 00:04:15.581 CXX test/cpp_headers/bit_pool.o 00:04:15.581 CC app/spdk_tgt/spdk_tgt.o 00:04:15.581 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:15.840 LINK stub 00:04:15.840 CXX test/cpp_headers/blob_bdev.o 00:04:15.840 CC test/event/event_perf/event_perf.o 00:04:15.840 LINK vhost_fuzz 00:04:15.840 CC test/nvme/aer/aer.o 00:04:15.840 LINK env_dpdk_post_init 00:04:15.840 LINK spdk_tgt 00:04:15.840 CXX test/cpp_headers/blobfs_bdev.o 00:04:15.840 CC test/accel/dif/dif.o 00:04:15.840 LINK event_perf 00:04:16.100 CC app/spdk_lspci/spdk_lspci.o 00:04:16.100 CC test/nvme/reset/reset.o 00:04:16.100 CC test/env/memory/memory_ut.o 00:04:16.100 CXX test/cpp_headers/blobfs.o 00:04:16.100 CXX test/cpp_headers/blob.o 00:04:16.100 LINK spdk_lspci 00:04:16.100 LINK aer 00:04:16.100 CC test/event/reactor/reactor.o 00:04:16.361 CXX test/cpp_headers/conf.o 00:04:16.361 LINK reset 00:04:16.361 CC app/spdk_nvme_perf/perf.o 00:04:16.361 LINK reactor 00:04:16.361 CC app/spdk_nvme_identify/identify.o 00:04:16.361 CC test/nvme/sgl/sgl.o 00:04:16.361 CXX test/cpp_headers/config.o 00:04:16.620 CXX test/cpp_headers/cpuset.o 00:04:16.620 CC test/nvme/e2edp/nvme_dp.o 00:04:16.620 CC test/event/reactor_perf/reactor_perf.o 00:04:16.620 CXX test/cpp_headers/crc16.o 00:04:16.620 LINK dif 00:04:16.880 LINK iscsi_fuzz 00:04:16.880 LINK sgl 00:04:16.880 CXX test/cpp_headers/crc32.o 00:04:16.880 LINK reactor_perf 00:04:16.880 LINK nvme_dp 00:04:17.139 CXX test/cpp_headers/crc64.o 00:04:17.139 CC test/nvme/overhead/overhead.o 00:04:17.139 CC test/nvme/err_injection/err_injection.o 00:04:17.139 CC test/event/app_repeat/app_repeat.o 00:04:17.139 CC test/blobfs/mkfs/mkfs.o 00:04:17.139 CXX test/cpp_headers/dif.o 00:04:17.139 CC test/lvol/esnap/esnap.o 00:04:17.434 LINK memory_ut 00:04:17.434 LINK app_repeat 00:04:17.434 LINK err_injection 00:04:17.434 LINK spdk_nvme_identify 00:04:17.434 CXX test/cpp_headers/dma.o 00:04:17.434 LINK spdk_nvme_perf 00:04:17.434 LINK mkfs 00:04:17.434 LINK overhead 00:04:17.694 CXX test/cpp_headers/endian.o 00:04:17.694 CXX test/cpp_headers/env_dpdk.o 00:04:17.694 CXX test/cpp_headers/env.o 00:04:17.694 CC test/env/pci/pci_ut.o 00:04:17.694 CC app/spdk_nvme_discover/discovery_aer.o 00:04:17.694 CC test/event/scheduler/scheduler.o 00:04:17.956 CC test/nvme/reserve/reserve.o 00:04:17.956 CC test/nvme/startup/startup.o 00:04:17.956 CXX test/cpp_headers/event.o 00:04:17.956 LINK spdk_nvme_discover 00:04:17.956 LINK scheduler 
00:04:17.956 CC examples/thread/thread/thread_ex.o 00:04:18.217 CC test/bdev/bdevio/bdevio.o 00:04:18.217 CXX test/cpp_headers/fd_group.o 00:04:18.217 LINK reserve 00:04:18.217 LINK startup 00:04:18.217 LINK pci_ut 00:04:18.217 CC app/spdk_top/spdk_top.o 00:04:18.217 CXX test/cpp_headers/fd.o 00:04:18.477 LINK thread 00:04:18.477 CC test/nvme/simple_copy/simple_copy.o 00:04:18.477 CXX test/cpp_headers/file.o 00:04:18.477 CC examples/sock/hello_world/hello_sock.o 00:04:18.477 CXX test/cpp_headers/fsdev.o 00:04:18.477 CXX test/cpp_headers/fsdev_module.o 00:04:18.477 CC test/nvme/connect_stress/connect_stress.o 00:04:18.477 LINK bdevio 00:04:18.738 CXX test/cpp_headers/ftl.o 00:04:18.738 LINK simple_copy 00:04:18.738 LINK connect_stress 00:04:18.738 LINK hello_sock 00:04:18.738 CC test/nvme/boot_partition/boot_partition.o 00:04:19.000 CC examples/vmd/lsvmd/lsvmd.o 00:04:19.000 CXX test/cpp_headers/fuse_dispatcher.o 00:04:19.000 CC examples/idxd/perf/perf.o 00:04:19.000 LINK boot_partition 00:04:19.000 CC examples/vmd/led/led.o 00:04:19.000 LINK lsvmd 00:04:19.000 CC test/nvme/compliance/nvme_compliance.o 00:04:19.000 CXX test/cpp_headers/gpt_spec.o 00:04:19.000 LINK led 00:04:19.261 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:19.261 CC test/nvme/fused_ordering/fused_ordering.o 00:04:19.261 CXX test/cpp_headers/hexlify.o 00:04:19.261 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.261 LINK idxd_perf 00:04:19.261 LINK spdk_top 00:04:19.261 CC test/nvme/fdp/fdp.o 00:04:19.261 LINK nvme_compliance 00:04:19.261 CXX test/cpp_headers/histogram_data.o 00:04:19.261 CXX test/cpp_headers/idxd.o 00:04:19.521 LINK fused_ordering 00:04:19.521 LINK doorbell_aers 00:04:19.521 LINK hello_fsdev 00:04:19.521 CXX test/cpp_headers/idxd_spec.o 00:04:19.521 CC app/vhost/vhost.o 00:04:19.521 CC app/spdk_dd/spdk_dd.o 00:04:19.521 CXX test/cpp_headers/init.o 00:04:19.521 CC test/nvme/cuse/cuse.o 00:04:19.781 LINK fdp 00:04:19.781 CC examples/accel/perf/accel_perf.o 00:04:19.781 CC app/fio/nvme/fio_plugin.o 00:04:19.781 LINK vhost 00:04:19.781 CXX test/cpp_headers/ioat.o 00:04:19.781 CC examples/blob/hello_world/hello_blob.o 00:04:20.041 CC app/fio/bdev/fio_plugin.o 00:04:20.041 LINK spdk_dd 00:04:20.041 CXX test/cpp_headers/ioat_spec.o 00:04:20.041 LINK hello_blob 00:04:20.041 CC examples/nvme/hello_world/hello_world.o 00:04:20.041 CC examples/nvme/reconnect/reconnect.o 00:04:20.041 CXX test/cpp_headers/iscsi_spec.o 00:04:20.302 LINK accel_perf 00:04:20.302 CXX test/cpp_headers/json.o 00:04:20.302 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:20.302 LINK hello_world 00:04:20.302 LINK spdk_nvme 00:04:20.302 CXX test/cpp_headers/jsonrpc.o 00:04:20.302 CC examples/blob/cli/blobcli.o 00:04:20.563 CXX test/cpp_headers/keyring.o 00:04:20.563 LINK spdk_bdev 00:04:20.563 LINK reconnect 00:04:20.563 CXX test/cpp_headers/keyring_module.o 00:04:20.563 CXX test/cpp_headers/likely.o 00:04:20.563 CC examples/nvme/arbitration/arbitration.o 00:04:20.563 CC examples/nvme/hotplug/hotplug.o 00:04:20.563 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.563 CXX test/cpp_headers/log.o 00:04:20.834 CXX test/cpp_headers/lvol.o 00:04:20.834 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.834 CXX test/cpp_headers/md5.o 00:04:20.834 CXX test/cpp_headers/memory.o 00:04:20.834 LINK nvme_manage 00:04:20.834 LINK hotplug 00:04:20.834 LINK hello_bdev 00:04:20.834 LINK arbitration 00:04:20.834 LINK cuse 00:04:21.096 CXX test/cpp_headers/mmio.o 00:04:21.096 LINK blobcli 00:04:21.096 CXX test/cpp_headers/nbd.o 00:04:21.096 CXX 
test/cpp_headers/net.o 00:04:21.096 CXX test/cpp_headers/notify.o 00:04:21.096 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:21.096 CXX test/cpp_headers/nvme.o 00:04:21.096 CC examples/nvme/abort/abort.o 00:04:21.096 CXX test/cpp_headers/nvme_intel.o 00:04:21.096 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.096 CXX test/cpp_headers/nvme_ocssd.o 00:04:21.096 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:21.096 CXX test/cpp_headers/nvme_spec.o 00:04:21.355 CXX test/cpp_headers/nvme_zns.o 00:04:21.355 CXX test/cpp_headers/nvmf_cmd.o 00:04:21.355 LINK cmb_copy 00:04:21.355 LINK pmr_persistence 00:04:21.355 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:21.355 CXX test/cpp_headers/nvmf.o 00:04:21.355 CXX test/cpp_headers/nvmf_spec.o 00:04:21.355 CXX test/cpp_headers/nvmf_transport.o 00:04:21.355 CXX test/cpp_headers/opal.o 00:04:21.355 CXX test/cpp_headers/opal_spec.o 00:04:21.355 CXX test/cpp_headers/pci_ids.o 00:04:21.614 LINK abort 00:04:21.614 CXX test/cpp_headers/pipe.o 00:04:21.614 CXX test/cpp_headers/queue.o 00:04:21.614 CXX test/cpp_headers/reduce.o 00:04:21.614 CXX test/cpp_headers/rpc.o 00:04:21.614 LINK bdevperf 00:04:21.614 CXX test/cpp_headers/scheduler.o 00:04:21.614 CXX test/cpp_headers/scsi.o 00:04:21.614 CXX test/cpp_headers/scsi_spec.o 00:04:21.614 CXX test/cpp_headers/sock.o 00:04:21.614 CXX test/cpp_headers/stdinc.o 00:04:21.614 CXX test/cpp_headers/string.o 00:04:21.614 CXX test/cpp_headers/thread.o 00:04:21.614 CXX test/cpp_headers/trace.o 00:04:21.614 CXX test/cpp_headers/trace_parser.o 00:04:21.614 CXX test/cpp_headers/tree.o 00:04:21.875 CXX test/cpp_headers/ublk.o 00:04:21.875 CXX test/cpp_headers/util.o 00:04:21.875 CXX test/cpp_headers/uuid.o 00:04:21.875 CXX test/cpp_headers/version.o 00:04:21.875 CXX test/cpp_headers/vfio_user_pci.o 00:04:21.875 CXX test/cpp_headers/vfio_user_spec.o 00:04:21.875 CXX test/cpp_headers/vhost.o 00:04:21.875 CXX test/cpp_headers/vmd.o 00:04:21.875 CXX test/cpp_headers/xor.o 00:04:21.875 CXX test/cpp_headers/zipf.o 00:04:21.875 CC examples/nvmf/nvmf/nvmf.o 00:04:22.137 LINK nvmf 00:04:23.077 LINK esnap 00:04:23.649 00:04:23.649 real 1m12.783s 00:04:23.649 user 6m56.177s 00:04:23.649 sys 1m12.844s 00:04:23.649 14:08:50 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:23.649 ************************************ 00:04:23.649 END TEST make 00:04:23.649 ************************************ 00:04:23.649 14:08:50 make -- common/autotest_common.sh@10 -- $ set +x 00:04:23.649 14:08:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:23.649 14:08:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:23.649 14:08:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:23.649 14:08:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.649 14:08:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:23.649 14:08:51 -- pm/common@44 -- $ pid=5030 00:04:23.649 14:08:51 -- pm/common@50 -- $ kill -TERM 5030 00:04:23.649 14:08:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.649 14:08:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:23.649 14:08:51 -- pm/common@44 -- $ pid=5031 00:04:23.649 14:08:51 -- pm/common@50 -- $ kill -TERM 5031 00:04:23.649 14:08:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:23.649 14:08:51 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh 
/home/vagrant/spdk_repo/autorun-spdk.conf 00:04:23.649 14:08:51 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.649 14:08:51 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.649 14:08:51 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.649 14:08:51 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.649 14:08:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.649 14:08:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.649 14:08:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.649 14:08:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.649 14:08:51 -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.649 14:08:51 -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.649 14:08:51 -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.649 14:08:51 -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.649 14:08:51 -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.649 14:08:51 -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.649 14:08:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.649 14:08:51 -- scripts/common.sh@344 -- # case "$op" in 00:04:23.649 14:08:51 -- scripts/common.sh@345 -- # : 1 00:04:23.649 14:08:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.649 14:08:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.649 14:08:51 -- scripts/common.sh@365 -- # decimal 1 00:04:23.649 14:08:51 -- scripts/common.sh@353 -- # local d=1 00:04:23.649 14:08:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.649 14:08:51 -- scripts/common.sh@355 -- # echo 1 00:04:23.649 14:08:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.649 14:08:51 -- scripts/common.sh@366 -- # decimal 2 00:04:23.649 14:08:51 -- scripts/common.sh@353 -- # local d=2 00:04:23.649 14:08:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.649 14:08:51 -- scripts/common.sh@355 -- # echo 2 00:04:23.649 14:08:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.649 14:08:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.649 14:08:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.649 14:08:51 -- scripts/common.sh@368 -- # return 0 00:04:23.649 14:08:51 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.649 14:08:51 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.649 --rc genhtml_branch_coverage=1 00:04:23.649 --rc genhtml_function_coverage=1 00:04:23.649 --rc genhtml_legend=1 00:04:23.649 --rc geninfo_all_blocks=1 00:04:23.649 --rc geninfo_unexecuted_blocks=1 00:04:23.649 00:04:23.649 ' 00:04:23.650 14:08:51 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.650 --rc genhtml_branch_coverage=1 00:04:23.650 --rc genhtml_function_coverage=1 00:04:23.650 --rc genhtml_legend=1 00:04:23.650 --rc geninfo_all_blocks=1 00:04:23.650 --rc geninfo_unexecuted_blocks=1 00:04:23.650 00:04:23.650 ' 00:04:23.650 14:08:51 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.650 --rc genhtml_branch_coverage=1 00:04:23.650 --rc genhtml_function_coverage=1 00:04:23.650 --rc genhtml_legend=1 00:04:23.650 --rc geninfo_all_blocks=1 00:04:23.650 --rc geninfo_unexecuted_blocks=1 00:04:23.650 00:04:23.650 ' 00:04:23.650 14:08:51 -- common/autotest_common.sh@1705 -- 
# LCOV='lcov 00:04:23.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.650 --rc genhtml_branch_coverage=1 00:04:23.650 --rc genhtml_function_coverage=1 00:04:23.650 --rc genhtml_legend=1 00:04:23.650 --rc geninfo_all_blocks=1 00:04:23.650 --rc geninfo_unexecuted_blocks=1 00:04:23.650 00:04:23.650 ' 00:04:23.650 14:08:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.650 14:08:51 -- nvmf/common.sh@7 -- # uname -s 00:04:23.650 14:08:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.650 14:08:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.650 14:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.650 14:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.650 14:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.650 14:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.650 14:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.650 14:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.650 14:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.650 14:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.650 14:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:04:23.650 14:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:04:23.650 14:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.650 14:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.650 14:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:23.650 14:08:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.650 14:08:51 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.650 14:08:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.650 14:08:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.650 14:08:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.650 14:08:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.650 14:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.650 14:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.650 14:08:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.650 14:08:51 -- paths/export.sh@5 -- # export PATH 00:04:23.650 14:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.650 14:08:51 -- 
nvmf/common.sh@51 -- # : 0 00:04:23.650 14:08:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.650 14:08:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.650 14:08:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.650 14:08:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.650 14:08:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.650 14:08:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.650 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.650 14:08:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.650 14:08:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.650 14:08:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.650 14:08:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:23.650 14:08:51 -- spdk/autotest.sh@32 -- # uname -s 00:04:23.650 14:08:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:23.650 14:08:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:23.650 14:08:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.650 14:08:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:23.650 14:08:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.650 14:08:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:23.910 14:08:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:23.910 14:08:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:23.910 14:08:51 -- spdk/autotest.sh@48 -- # udevadm_pid=56167 00:04:23.910 14:08:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:23.910 14:08:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:23.910 14:08:51 -- pm/common@17 -- # local monitor 00:04:23.910 14:08:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.910 14:08:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.910 14:08:51 -- pm/common@25 -- # sleep 1 00:04:23.910 14:08:51 -- pm/common@21 -- # date +%s 00:04:23.910 14:08:51 -- pm/common@21 -- # date +%s 00:04:23.911 14:08:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730902131 00:04:23.911 14:08:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730902131 00:04:23.911 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730902131_collect-cpu-load.pm.log 00:04:23.911 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730902131_collect-vmstat.pm.log 00:04:24.853 14:08:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.853 14:08:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:24.853 14:08:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.853 14:08:52 -- common/autotest_common.sh@10 -- # set +x 00:04:24.853 14:08:52 -- spdk/autotest.sh@59 -- # create_test_list 00:04:24.853 14:08:52 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:24.853 14:08:52 -- common/autotest_common.sh@10 -- # set +x 00:04:24.853 14:08:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:24.853 14:08:52 -- spdk/autotest.sh@61 -- # readlink -f 
/home/vagrant/spdk_repo/spdk 00:04:24.853 14:08:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:24.853 14:08:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:24.853 14:08:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:24.853 14:08:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:24.853 14:08:52 -- common/autotest_common.sh@1455 -- # uname 00:04:24.853 14:08:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:24.853 14:08:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:24.853 14:08:52 -- common/autotest_common.sh@1475 -- # uname 00:04:24.853 14:08:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:24.853 14:08:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:24.853 14:08:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:24.853 lcov: LCOV version 1.15 00:04:24.853 14:08:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:39.763 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:39.763 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:57.870 14:09:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:57.870 14:09:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.870 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.870 14:09:24 -- spdk/autotest.sh@78 -- # rm -f 00:04:57.871 14:09:24 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.871 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:57.871 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:57.871 14:09:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:57.871 14:09:24 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:57.871 14:09:24 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:57.871 14:09:24 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:57.871 14:09:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.871 14:09:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:57.871 14:09:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:57.871 14:09:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.871 14:09:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.871 14:09:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.871 14:09:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:57.871 14:09:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:57.871 14:09:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:57.871 14:09:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.871 14:09:24 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.871 14:09:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:57.871 14:09:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:57.871 14:09:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:57.871 14:09:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.871 14:09:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.871 14:09:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:57.871 14:09:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:57.871 14:09:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:57.871 14:09:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.871 14:09:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:57.871 14:09:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.871 14:09:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.871 14:09:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:57.871 14:09:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:57.871 14:09:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:57.871 No valid GPT data, bailing 00:04:57.871 14:09:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.871 14:09:24 -- scripts/common.sh@394 -- # pt= 00:04:57.871 14:09:24 -- scripts/common.sh@395 -- # return 1 00:04:57.871 14:09:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:57.871 1+0 records in 00:04:57.871 1+0 records out 00:04:57.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078148 s, 134 MB/s 00:04:57.871 14:09:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.871 14:09:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.871 14:09:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:57.871 14:09:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:57.871 14:09:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:57.871 No valid GPT data, bailing 00:04:57.871 14:09:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:57.871 14:09:24 -- scripts/common.sh@394 -- # pt= 00:04:57.871 14:09:24 -- scripts/common.sh@395 -- # return 1 00:04:57.871 14:09:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:57.871 1+0 records in 00:04:57.871 1+0 records out 00:04:57.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632854 s, 166 MB/s 00:04:57.871 14:09:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.871 14:09:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.871 14:09:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:57.871 14:09:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:57.871 14:09:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:57.871 No valid GPT data, bailing 00:04:57.871 14:09:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:57.871 14:09:24 -- scripts/common.sh@394 -- # pt= 00:04:57.871 14:09:24 -- scripts/common.sh@395 -- # return 1 00:04:57.871 14:09:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:57.871 1+0 records in 00:04:57.871 1+0 records out 00:04:57.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0051365 s, 204 MB/s 00:04:57.871 14:09:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.871 14:09:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.871 14:09:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:57.871 14:09:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:57.871 14:09:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:57.871 No valid GPT data, bailing 00:04:57.871 14:09:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:57.871 14:09:25 -- scripts/common.sh@394 -- # pt= 00:04:57.871 14:09:25 -- scripts/common.sh@395 -- # return 1 00:04:57.871 14:09:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:57.871 1+0 records in 00:04:57.871 1+0 records out 00:04:57.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00699029 s, 150 MB/s 00:04:57.871 14:09:25 -- spdk/autotest.sh@105 -- # sync 00:04:57.871 14:09:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:57.871 14:09:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:57.871 14:09:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:59.266 14:09:26 -- spdk/autotest.sh@111 -- # uname -s 00:04:59.267 14:09:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:59.267 14:09:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:59.267 14:09:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:59.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.854 Hugepages 00:04:59.854 node hugesize free / total 00:04:59.854 node0 1048576kB 0 / 0 00:04:59.854 node0 2048kB 0 / 0 00:04:59.854 00:04:59.854 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.854 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:59.854 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:59.854 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:59.854 14:09:27 -- spdk/autotest.sh@117 -- # uname -s 00:04:59.854 14:09:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:59.854 14:09:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:59.854 14:09:27 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.684 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.684 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.684 14:09:28 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:01.641 14:09:29 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:01.641 14:09:29 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:01.641 14:09:29 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.641 14:09:29 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:01.641 14:09:29 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:01.641 14:09:29 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:01.641 14:09:29 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.641 14:09:29 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:01.641 14:09:29 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:01.641 14:09:29 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:01.641 14:09:29 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:01.641 14:09:29 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.898 Waiting for block devices as requested 00:05:02.155 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:02.155 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:02.155 14:09:29 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:02.155 14:09:29 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:02.155 14:09:29 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.155 14:09:29 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:02.155 14:09:29 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:02.155 14:09:29 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:02.155 14:09:29 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:02.155 14:09:29 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:02.155 14:09:29 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:02.155 14:09:29 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:02.155 14:09:29 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:02.155 14:09:29 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:02.155 14:09:29 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:02.155 14:09:29 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:02.155 14:09:29 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:02.155 14:09:29 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:02.155 14:09:29 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:02.155 14:09:29 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:02.155 14:09:29 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:02.155 14:09:29 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:02.155 14:09:29 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:02.155 14:09:29 -- common/autotest_common.sh@1541 -- # continue 00:05:02.156 14:09:29 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:02.156 14:09:29 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:02.156 14:09:29 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.156 14:09:29 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:02.156 14:09:29 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:02.156 14:09:29 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:02.156 14:09:29 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:02.156 14:09:29 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:02.156 14:09:29 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:02.156 14:09:29 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:02.156 14:09:29 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 
00:05:02.156 14:09:29 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:02.156 14:09:29 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:02.156 14:09:29 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:02.156 14:09:29 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:02.156 14:09:29 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:02.156 14:09:29 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:02.156 14:09:29 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:02.156 14:09:29 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:02.156 14:09:29 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:02.156 14:09:29 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:02.156 14:09:29 -- common/autotest_common.sh@1541 -- # continue 00:05:02.156 14:09:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:02.156 14:09:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.156 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:02.156 14:09:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:02.156 14:09:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.156 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:02.156 14:09:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.976 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.976 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.976 14:09:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:02.976 14:09:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.976 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.976 14:09:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:02.976 14:09:30 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:02.976 14:09:30 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.976 14:09:30 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:02.976 14:09:30 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:02.976 14:09:30 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:02.976 14:09:30 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:02.976 14:09:30 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:02.976 14:09:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:02.976 14:09:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:02.976 14:09:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.976 14:09:30 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:02.976 14:09:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:02.976 14:09:30 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:02.976 14:09:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:02.976 14:09:30 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:02.976 14:09:30 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:02.976 14:09:30 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:02.976 14:09:30 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.976 14:09:30 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:02.976 14:09:30 -- 
common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:02.976 14:09:30 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:02.976 14:09:30 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.976 14:09:30 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:02.976 14:09:30 -- common/autotest_common.sh@1570 -- # return 0 00:05:02.976 14:09:30 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:02.976 14:09:30 -- common/autotest_common.sh@1578 -- # return 0 00:05:02.976 14:09:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:02.976 14:09:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:02.976 14:09:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:02.976 14:09:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:02.976 14:09:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:02.976 14:09:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.976 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.976 14:09:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:02.976 14:09:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:02.976 14:09:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.976 14:09:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.976 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.976 ************************************ 00:05:02.976 START TEST env 00:05:02.976 ************************************ 00:05:02.976 14:09:30 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.234 * Looking for test storage... 00:05:03.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.234 14:09:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.234 14:09:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.234 14:09:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.234 14:09:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.234 14:09:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.234 14:09:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.234 14:09:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.234 14:09:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.234 14:09:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.234 14:09:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.234 14:09:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.234 14:09:30 env -- scripts/common.sh@344 -- # case "$op" in 00:05:03.234 14:09:30 env -- scripts/common.sh@345 -- # : 1 00:05:03.234 14:09:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.234 14:09:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.234 14:09:30 env -- scripts/common.sh@365 -- # decimal 1 00:05:03.234 14:09:30 env -- scripts/common.sh@353 -- # local d=1 00:05:03.234 14:09:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.234 14:09:30 env -- scripts/common.sh@355 -- # echo 1 00:05:03.234 14:09:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.234 14:09:30 env -- scripts/common.sh@366 -- # decimal 2 00:05:03.234 14:09:30 env -- scripts/common.sh@353 -- # local d=2 00:05:03.234 14:09:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.234 14:09:30 env -- scripts/common.sh@355 -- # echo 2 00:05:03.234 14:09:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.234 14:09:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.234 14:09:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.234 14:09:30 env -- scripts/common.sh@368 -- # return 0 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.234 --rc genhtml_branch_coverage=1 00:05:03.234 --rc genhtml_function_coverage=1 00:05:03.234 --rc genhtml_legend=1 00:05:03.234 --rc geninfo_all_blocks=1 00:05:03.234 --rc geninfo_unexecuted_blocks=1 00:05:03.234 00:05:03.234 ' 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.234 --rc genhtml_branch_coverage=1 00:05:03.234 --rc genhtml_function_coverage=1 00:05:03.234 --rc genhtml_legend=1 00:05:03.234 --rc geninfo_all_blocks=1 00:05:03.234 --rc geninfo_unexecuted_blocks=1 00:05:03.234 00:05:03.234 ' 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.234 --rc genhtml_branch_coverage=1 00:05:03.234 --rc genhtml_function_coverage=1 00:05:03.234 --rc genhtml_legend=1 00:05:03.234 --rc geninfo_all_blocks=1 00:05:03.234 --rc geninfo_unexecuted_blocks=1 00:05:03.234 00:05:03.234 ' 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.234 --rc genhtml_branch_coverage=1 00:05:03.234 --rc genhtml_function_coverage=1 00:05:03.234 --rc genhtml_legend=1 00:05:03.234 --rc geninfo_all_blocks=1 00:05:03.234 --rc geninfo_unexecuted_blocks=1 00:05:03.234 00:05:03.234 ' 00:05:03.234 14:09:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.234 14:09:30 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.234 14:09:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.234 ************************************ 00:05:03.234 START TEST env_memory 00:05:03.234 ************************************ 00:05:03.234 14:09:30 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.234 00:05:03.234 00:05:03.234 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.234 http://cunit.sourceforge.net/ 00:05:03.234 00:05:03.234 00:05:03.234 Suite: memory 00:05:03.234 Test: alloc and free memory map ...[2024-11-06 14:09:30.792408] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.234 passed 00:05:03.234 Test: mem map translation ...[2024-11-06 14:09:30.831101] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.234 [2024-11-06 14:09:30.831160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.234 [2024-11-06 14:09:30.831221] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.234 [2024-11-06 14:09:30.831237] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.492 passed 00:05:03.492 Test: mem map registration ...[2024-11-06 14:09:30.899230] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:03.492 [2024-11-06 14:09:30.899278] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:03.492 passed 00:05:03.492 Test: mem map adjacent registrations ...passed 00:05:03.492 00:05:03.492 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.492 suites 1 1 n/a 0 0 00:05:03.492 tests 4 4 4 0 0 00:05:03.492 asserts 152 152 152 0 n/a 00:05:03.492 00:05:03.492 Elapsed time = 0.238 seconds 00:05:03.492 00:05:03.492 real 0m0.274s 00:05:03.492 user 0m0.250s 00:05:03.492 sys 0m0.017s 00:05:03.492 14:09:31 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.492 14:09:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.492 ************************************ 00:05:03.492 END TEST env_memory 00:05:03.492 ************************************ 00:05:03.492 14:09:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.492 14:09:31 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.492 14:09:31 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.492 14:09:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.492 ************************************ 00:05:03.492 START TEST env_vtophys 00:05:03.492 ************************************ 00:05:03.492 14:09:31 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.492 EAL: lib.eal log level changed from notice to debug 00:05:03.492 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 1 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 2 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 3 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 4 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 5 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 6 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 7 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 8 as core 0 on socket 0 00:05:03.492 EAL: Detected lcore 9 as core 0 on socket 0 00:05:03.492 EAL: Maximum logical cores by configuration: 128 00:05:03.492 EAL: Detected CPU lcores: 10 00:05:03.492 EAL: Detected NUMA nodes: 1 00:05:03.492 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:03.492 EAL: Detected shared linkage of DPDK 00:05:03.492 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:03.492 EAL: Selected IOVA mode 'PA' 00:05:03.492 EAL: Probing VFIO support... 00:05:03.492 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.492 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:03.492 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.492 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.492 EAL: Setting up physically contiguous memory... 00:05:03.492 EAL: Setting maximum number of open files to 524288 00:05:03.492 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.492 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.492 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.492 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.492 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.492 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.492 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.492 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.492 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.492 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.492 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.492 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.492 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.492 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.492 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.492 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.492 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.492 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.492 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.492 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.492 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.492 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.492 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.492 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.492 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.492 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:03.492 EAL: Hugepages will be freed exactly as allocated. 00:05:03.492 EAL: No shared files mode enabled, IPC is disabled 00:05:03.492 EAL: No shared files mode enabled, IPC is disabled 00:05:03.807 EAL: TSC frequency is ~2600000 KHz 00:05:03.807 EAL: Main lcore 0 is ready (tid=7f56918f8a40;cpuset=[0]) 00:05:03.807 EAL: Trying to obtain current memory policy. 00:05:03.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.807 EAL: Restoring previous memory policy: 0 00:05:03.807 EAL: request: mp_malloc_sync 00:05:03.807 EAL: No shared files mode enabled, IPC is disabled 00:05:03.807 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.807 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.807 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.807 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.807 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:03.807 00:05:03.807 00:05:03.807 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.807 http://cunit.sourceforge.net/ 00:05:03.807 00:05:03.807 00:05:03.807 Suite: components_suite 00:05:04.066 Test: vtophys_malloc_test ...passed 00:05:04.066 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.066 EAL: Restoring previous memory policy: 4 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.066 EAL: Trying to obtain current memory policy. 00:05:04.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.066 EAL: Restoring previous memory policy: 4 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.066 EAL: Trying to obtain current memory policy. 00:05:04.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.066 EAL: Restoring previous memory policy: 4 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.066 EAL: Trying to obtain current memory policy. 00:05:04.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.066 EAL: Restoring previous memory policy: 4 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.066 EAL: Trying to obtain current memory policy. 00:05:04.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.066 EAL: Restoring previous memory policy: 4 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.066 EAL: request: mp_malloc_sync 00:05:04.066 EAL: No shared files mode enabled, IPC is disabled 00:05:04.066 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.066 EAL: Trying to obtain current memory policy. 
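The expand/shrink sizes reported by vtophys_spdk_malloc_test above and below follow a clean pattern: 4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB, i.e. (2^n + 2)MB per round, consistent with 2^n MB allocations plus what looks like one extra 2MB hugepage of heap overhead (an observation from this log, not a documented guarantee). A sketch of a loop that would produce that sequence, assuming the real test source may differ:

    #include <stdlib.h>

    int main(void)
    {
            /* each round mallocs 2^n MB from the DPDK-backed heap, then frees
             * it; EAL reports "expanded by (2^n + 2)MB" and a matching shrink */
            for (int n = 1; n <= 10; n++) {
                    size_t len = ((size_t)1 << n) * 1024 * 1024;
                    void *buf = malloc(len);

                    if (buf == NULL)
                            return 1;
                    free(buf);
            }
            return 0;
    }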
00:05:04.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.323 EAL: Restoring previous memory policy: 4 00:05:04.323 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.323 EAL: request: mp_malloc_sync 00:05:04.323 EAL: No shared files mode enabled, IPC is disabled 00:05:04.323 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.323 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.323 EAL: request: mp_malloc_sync 00:05:04.323 EAL: No shared files mode enabled, IPC is disabled 00:05:04.323 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.323 EAL: Trying to obtain current memory policy. 00:05:04.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.323 EAL: Restoring previous memory policy: 4 00:05:04.323 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.323 EAL: request: mp_malloc_sync 00:05:04.323 EAL: No shared files mode enabled, IPC is disabled 00:05:04.323 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.581 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.581 EAL: request: mp_malloc_sync 00:05:04.581 EAL: No shared files mode enabled, IPC is disabled 00:05:04.581 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.581 EAL: Trying to obtain current memory policy. 00:05:04.581 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.581 EAL: Restoring previous memory policy: 4 00:05:04.581 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.581 EAL: request: mp_malloc_sync 00:05:04.581 EAL: No shared files mode enabled, IPC is disabled 00:05:04.581 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.146 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.146 EAL: request: mp_malloc_sync 00:05:05.146 EAL: No shared files mode enabled, IPC is disabled 00:05:05.146 EAL: Heap on socket 0 was shrunk by 258MB 00:05:05.146 EAL: Trying to obtain current memory policy. 00:05:05.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.403 EAL: Restoring previous memory policy: 4 00:05:05.403 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.403 EAL: request: mp_malloc_sync 00:05:05.403 EAL: No shared files mode enabled, IPC is disabled 00:05:05.403 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.968 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.968 EAL: request: mp_malloc_sync 00:05:05.968 EAL: No shared files mode enabled, IPC is disabled 00:05:05.968 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.533 EAL: Trying to obtain current memory policy. 
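Every "Calling mem event callback 'spdk:(nil)'" line in this suite is DPDK invoking a hook that SPDK registered at startup (the "Mem event callback 'spdk:(nil)' registered" line earlier in this log) so it can update its address-translation maps as hugepages come and go. A hedged sketch of registering such a hook through the public DPDK API; the callback body is illustrative, an initialized EAL is assumed, and the 'spdk:(nil)' rendering as name plus NULL arg is an inference from this log's format:

    #include <stdio.h>
    #include <rte_memory.h>

    /* invoked by EAL whenever heap memory is allocated or freed */
    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
            (void)arg;
            printf("%s: addr=%p len=%zu\n",
                   event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    /* to be called after rte_eal_init() */
    static void
    register_hook(void)
    {
            rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
    }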
00:05:06.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.533 EAL: Restoring previous memory policy: 4 00:05:06.533 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.533 EAL: request: mp_malloc_sync 00:05:06.533 EAL: No shared files mode enabled, IPC is disabled 00:05:06.533 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.916 EAL: request: mp_malloc_sync 00:05:07.916 EAL: No shared files mode enabled, IPC is disabled 00:05:07.916 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.486 passed 00:05:08.486 00:05:08.486 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.486 suites 1 1 n/a 0 0 00:05:08.486 tests 2 2 2 0 0 00:05:08.486 asserts 5831 5831 5831 0 n/a 00:05:08.486 00:05:08.486 Elapsed time = 4.840 seconds 00:05:08.486 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.486 EAL: request: mp_malloc_sync 00:05:08.486 EAL: No shared files mode enabled, IPC is disabled 00:05:08.486 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.744 EAL: No shared files mode enabled, IPC is disabled 00:05:08.744 EAL: No shared files mode enabled, IPC is disabled 00:05:08.744 EAL: No shared files mode enabled, IPC is disabled 00:05:08.744 00:05:08.744 real 0m5.094s 00:05:08.744 user 0m4.325s 00:05:08.744 sys 0m0.626s 00:05:08.744 14:09:36 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.744 14:09:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.744 ************************************ 00:05:08.744 END TEST env_vtophys 00:05:08.744 ************************************ 00:05:08.744 14:09:36 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.744 14:09:36 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.744 14:09:36 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.744 14:09:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.744 ************************************ 00:05:08.744 START TEST env_pci 00:05:08.744 ************************************ 00:05:08.744 14:09:36 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.744 00:05:08.744 00:05:08.744 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.744 http://cunit.sourceforge.net/ 00:05:08.744 00:05:08.744 00:05:08.744 Suite: pci 00:05:08.744 Test: pci_hook ...[2024-11-06 14:09:36.216474] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58447 has claimed it 00:05:08.744 passed 00:05:08.744 00:05:08.744 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.744 suites 1 1 n/a 0 0 00:05:08.744 tests 1 1 1 0 0 00:05:08.744 asserts 25 25 25 0 n/a 00:05:08.744 00:05:08.744 Elapsed time = 0.005 seconds 00:05:08.744 EAL: Cannot find device (10000:00:01.0) 00:05:08.744 EAL: Failed to attach device on primary process 00:05:08.744 00:05:08.744 real 0m0.063s 00:05:08.744 user 0m0.028s 00:05:08.744 sys 0m0.034s 00:05:08.744 ************************************ 00:05:08.744 END TEST env_pci 00:05:08.744 ************************************ 00:05:08.744 14:09:36 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.744 14:09:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.744 14:09:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.744 14:09:36 env -- env/env.sh@15 -- # uname 00:05:08.744 14:09:36 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.744 14:09:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.744 14:09:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.744 14:09:36 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:08.744 14:09:36 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.744 14:09:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.744 ************************************ 00:05:08.744 START TEST env_dpdk_post_init 00:05:08.744 ************************************ 00:05:08.744 14:09:36 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.744 EAL: Detected CPU lcores: 10 00:05:08.744 EAL: Detected NUMA nodes: 1 00:05:08.744 EAL: Detected shared linkage of DPDK 00:05:08.744 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.744 EAL: Selected IOVA mode 'PA' 00:05:09.002 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.002 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:09.002 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:09.002 Starting DPDK initialization... 00:05:09.002 Starting SPDK post initialization... 00:05:09.002 SPDK NVMe probe 00:05:09.002 Attaching to 0000:00:10.0 00:05:09.002 Attaching to 0000:00:11.0 00:05:09.002 Attached to 0000:00:10.0 00:05:09.002 Attached to 0000:00:11.0 00:05:09.002 Cleaning up... 00:05:09.002 00:05:09.002 real 0m0.243s 00:05:09.002 user 0m0.077s 00:05:09.002 sys 0m0.065s 00:05:09.002 14:09:36 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.002 14:09:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.002 ************************************ 00:05:09.002 END TEST env_dpdk_post_init 00:05:09.002 ************************************ 00:05:09.002 14:09:36 env -- env/env.sh@26 -- # uname 00:05:09.002 14:09:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:09.002 14:09:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.002 14:09:36 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.002 14:09:36 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.002 14:09:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.002 ************************************ 00:05:09.002 START TEST env_mem_callbacks 00:05:09.002 ************************************ 00:05:09.002 14:09:36 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.002 EAL: Detected CPU lcores: 10 00:05:09.002 EAL: Detected NUMA nodes: 1 00:05:09.002 EAL: Detected shared linkage of DPDK 00:05:09.002 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.002 EAL: Selected IOVA mode 'PA' 00:05:09.260 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.260 00:05:09.260 00:05:09.260 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.260 http://cunit.sourceforge.net/ 00:05:09.260 00:05:09.260 00:05:09.260 Suite: memory 00:05:09.260 Test: test ... 
00:05:09.260 register 0x200000200000 2097152 00:05:09.260 malloc 3145728 00:05:09.260 register 0x200000400000 4194304 00:05:09.260 buf 0x2000004fffc0 len 3145728 PASSED 00:05:09.260 malloc 64 00:05:09.260 buf 0x2000004ffec0 len 64 PASSED 00:05:09.260 malloc 4194304 00:05:09.260 register 0x200000800000 6291456 00:05:09.260 buf 0x2000009fffc0 len 4194304 PASSED 00:05:09.260 free 0x2000004fffc0 3145728 00:05:09.260 free 0x2000004ffec0 64 00:05:09.260 unregister 0x200000400000 4194304 PASSED 00:05:09.260 free 0x2000009fffc0 4194304 00:05:09.260 unregister 0x200000800000 6291456 PASSED 00:05:09.260 malloc 8388608 00:05:09.260 register 0x200000400000 10485760 00:05:09.260 buf 0x2000005fffc0 len 8388608 PASSED 00:05:09.260 free 0x2000005fffc0 8388608 00:05:09.260 unregister 0x200000400000 10485760 PASSED 00:05:09.260 passed 00:05:09.260 00:05:09.260 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.260 suites 1 1 n/a 0 0 00:05:09.260 tests 1 1 1 0 0 00:05:09.260 asserts 15 15 15 0 n/a 00:05:09.260 00:05:09.260 Elapsed time = 0.041 seconds 00:05:09.260 00:05:09.260 real 0m0.209s 00:05:09.260 user 0m0.060s 00:05:09.260 sys 0m0.048s 00:05:09.260 14:09:36 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.260 ************************************ 00:05:09.260 END TEST env_mem_callbacks 00:05:09.260 ************************************ 00:05:09.260 14:09:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:09.260 00:05:09.260 real 0m6.264s 00:05:09.260 user 0m4.908s 00:05:09.260 sys 0m1.007s 00:05:09.260 14:09:36 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.260 14:09:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.260 ************************************ 00:05:09.260 END TEST env 00:05:09.260 ************************************ 00:05:09.260 14:09:36 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:09.260 14:09:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.260 14:09:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.260 14:09:36 -- common/autotest_common.sh@10 -- # set +x 00:05:09.260 ************************************ 00:05:09.260 START TEST rpc 00:05:09.260 ************************************ 00:05:09.260 14:09:36 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:09.519 * Looking for test storage... 
00:05:09.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.519 14:09:36 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:09.519 14:09:36 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:09.520 14:09:36 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:09.520 14:09:36 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.520 14:09:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.520 14:09:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.520 14:09:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.520 14:09:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.520 14:09:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.520 14:09:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.520 14:09:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.520 14:09:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:09.520 14:09:36 rpc -- scripts/common.sh@345 -- # : 1 00:05:09.520 14:09:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.520 14:09:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.520 14:09:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:09.520 14:09:36 rpc -- scripts/common.sh@353 -- # local d=1 00:05:09.520 14:09:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.520 14:09:36 rpc -- scripts/common.sh@355 -- # echo 1 00:05:09.520 14:09:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.520 14:09:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@353 -- # local d=2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.520 14:09:36 rpc -- scripts/common.sh@355 -- # echo 2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.520 14:09:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.520 14:09:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.520 14:09:36 rpc -- scripts/common.sh@368 -- # return 0 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:09.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.520 --rc genhtml_branch_coverage=1 00:05:09.520 --rc genhtml_function_coverage=1 00:05:09.520 --rc genhtml_legend=1 00:05:09.520 --rc geninfo_all_blocks=1 00:05:09.520 --rc geninfo_unexecuted_blocks=1 00:05:09.520 00:05:09.520 ' 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:09.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.520 --rc genhtml_branch_coverage=1 00:05:09.520 --rc genhtml_function_coverage=1 00:05:09.520 --rc genhtml_legend=1 00:05:09.520 --rc geninfo_all_blocks=1 00:05:09.520 --rc geninfo_unexecuted_blocks=1 00:05:09.520 00:05:09.520 ' 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:09.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.520 --rc genhtml_branch_coverage=1 00:05:09.520 --rc genhtml_function_coverage=1 00:05:09.520 --rc 
genhtml_legend=1 00:05:09.520 --rc geninfo_all_blocks=1 00:05:09.520 --rc geninfo_unexecuted_blocks=1 00:05:09.520 00:05:09.520 ' 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:09.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.520 --rc genhtml_branch_coverage=1 00:05:09.520 --rc genhtml_function_coverage=1 00:05:09.520 --rc genhtml_legend=1 00:05:09.520 --rc geninfo_all_blocks=1 00:05:09.520 --rc geninfo_unexecuted_blocks=1 00:05:09.520 00:05:09.520 ' 00:05:09.520 14:09:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58574 00:05:09.520 14:09:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.520 14:09:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58574 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@833 -- # '[' -z 58574 ']' 00:05:09.520 14:09:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.520 14:09:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.520 [2024-11-06 14:09:37.090508] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:09.520 [2024-11-06 14:09:37.090637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58574 ] 00:05:09.788 [2024-11-06 14:09:37.247359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.788 [2024-11-06 14:09:37.347190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.788 [2024-11-06 14:09:37.347249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58574' to capture a snapshot of events at runtime. 00:05:09.788 [2024-11-06 14:09:37.347259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.788 [2024-11-06 14:09:37.347272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.788 [2024-11-06 14:09:37.347279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58574 for offline analysis/debug. 
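The target coming up here (pid 58574) serves JSON-RPC on the Unix socket /var/tmp/spdk.sock, and every rpc_cmd in the suites below is one request/response exchange over it. A stripped-down sketch of such a call (the socket path and method name are taken from this log; the read-until-it-parses loop is our own simplification, since the server's responses are unframed JSON):

import json
import socket

# Connect to the socket spdk_tgt announced above.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk.sock")

# bdev_get_bdevs takes no required params; the method name appears
# throughout the rpc suites below.
sock.sendall(json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_get_bdevs",
}).encode())

# SPDK writes the response as plain JSON with no length framing, so
# keep reading until the buffer parses as a complete document.
buf = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        raise ConnectionError("target closed the socket mid-response")
    buf += chunk
    try:
        reply = json.loads(buf)
        break
    except ValueError:
        continue

print(json.dumps(reply["result"], indent=2))
sock.close()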
00:05:09.788 [2024-11-06 14:09:37.348143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.352 14:09:37 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.352 14:09:37 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:10.352 14:09:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.352 14:09:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.352 14:09:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.352 14:09:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.352 14:09:37 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:10.352 14:09:37 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.352 14:09:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.352 ************************************ 00:05:10.352 START TEST rpc_integrity 00:05:10.352 ************************************ 00:05:10.352 14:09:37 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:10.352 14:09:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.352 14:09:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.352 14:09:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.352 14:09:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.352 14:09:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.352 14:09:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.778 14:09:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.778 14:09:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.778 14:09:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.778 14:09:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.778 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:10.778 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.778 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.778 { 00:05:10.778 "aliases": [ 00:05:10.778 "6894d946-42f9-44e8-b343-3a8d191b311c" 00:05:10.778 ], 00:05:10.778 "assigned_rate_limits": { 00:05:10.778 "r_mbytes_per_sec": 0, 00:05:10.778 "rw_ios_per_sec": 0, 00:05:10.778 "rw_mbytes_per_sec": 0, 00:05:10.778 "w_mbytes_per_sec": 0 00:05:10.778 }, 00:05:10.778 "block_size": 512, 00:05:10.778 "claimed": false, 00:05:10.778 "driver_specific": {}, 00:05:10.778 "memory_domains": [ 00:05:10.778 { 00:05:10.778 "dma_device_id": "system", 00:05:10.778 "dma_device_type": 1 00:05:10.778 }, 00:05:10.778 { 00:05:10.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.778 "dma_device_type": 2 00:05:10.778 } 00:05:10.778 ], 00:05:10.778 "name": "Malloc0", 
00:05:10.778 "num_blocks": 16384, 00:05:10.778 "product_name": "Malloc disk", 00:05:10.778 "supported_io_types": { 00:05:10.778 "abort": true, 00:05:10.778 "compare": false, 00:05:10.778 "compare_and_write": false, 00:05:10.778 "copy": true, 00:05:10.778 "flush": true, 00:05:10.778 "get_zone_info": false, 00:05:10.778 "nvme_admin": false, 00:05:10.778 "nvme_io": false, 00:05:10.778 "nvme_io_md": false, 00:05:10.778 "nvme_iov_md": false, 00:05:10.778 "read": true, 00:05:10.778 "reset": true, 00:05:10.778 "seek_data": false, 00:05:10.778 "seek_hole": false, 00:05:10.778 "unmap": true, 00:05:10.778 "write": true, 00:05:10.778 "write_zeroes": true, 00:05:10.778 "zcopy": true, 00:05:10.778 "zone_append": false, 00:05:10.778 "zone_management": false 00:05:10.778 }, 00:05:10.778 "uuid": "6894d946-42f9-44e8-b343-3a8d191b311c", 00:05:10.778 "zoned": false 00:05:10.778 } 00:05:10.778 ]' 00:05:10.778 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.778 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.778 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.778 [2024-11-06 14:09:38.073411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:10.778 [2024-11-06 14:09:38.073485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.778 [2024-11-06 14:09:38.073511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:10.778 [2024-11-06 14:09:38.073523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.778 [2024-11-06 14:09:38.075833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.778 [2024-11-06 14:09:38.075881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.778 Passthru0 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.778 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.778 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.779 { 00:05:10.779 "aliases": [ 00:05:10.779 "6894d946-42f9-44e8-b343-3a8d191b311c" 00:05:10.779 ], 00:05:10.779 "assigned_rate_limits": { 00:05:10.779 "r_mbytes_per_sec": 0, 00:05:10.779 "rw_ios_per_sec": 0, 00:05:10.779 "rw_mbytes_per_sec": 0, 00:05:10.779 "w_mbytes_per_sec": 0 00:05:10.779 }, 00:05:10.779 "block_size": 512, 00:05:10.779 "claim_type": "exclusive_write", 00:05:10.779 "claimed": true, 00:05:10.779 "driver_specific": {}, 00:05:10.779 "memory_domains": [ 00:05:10.779 { 00:05:10.779 "dma_device_id": "system", 00:05:10.779 "dma_device_type": 1 00:05:10.779 }, 00:05:10.779 { 00:05:10.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.779 "dma_device_type": 2 00:05:10.779 } 00:05:10.779 ], 00:05:10.779 "name": "Malloc0", 00:05:10.779 "num_blocks": 16384, 00:05:10.779 "product_name": "Malloc disk", 00:05:10.779 "supported_io_types": { 00:05:10.779 "abort": true, 00:05:10.779 "compare": false, 00:05:10.779 
"compare_and_write": false, 00:05:10.779 "copy": true, 00:05:10.779 "flush": true, 00:05:10.779 "get_zone_info": false, 00:05:10.779 "nvme_admin": false, 00:05:10.779 "nvme_io": false, 00:05:10.779 "nvme_io_md": false, 00:05:10.779 "nvme_iov_md": false, 00:05:10.779 "read": true, 00:05:10.779 "reset": true, 00:05:10.779 "seek_data": false, 00:05:10.779 "seek_hole": false, 00:05:10.779 "unmap": true, 00:05:10.779 "write": true, 00:05:10.779 "write_zeroes": true, 00:05:10.779 "zcopy": true, 00:05:10.779 "zone_append": false, 00:05:10.779 "zone_management": false 00:05:10.779 }, 00:05:10.779 "uuid": "6894d946-42f9-44e8-b343-3a8d191b311c", 00:05:10.779 "zoned": false 00:05:10.779 }, 00:05:10.779 { 00:05:10.779 "aliases": [ 00:05:10.779 "caf19717-d924-5b0d-8c5b-c29acf73aebf" 00:05:10.779 ], 00:05:10.779 "assigned_rate_limits": { 00:05:10.779 "r_mbytes_per_sec": 0, 00:05:10.779 "rw_ios_per_sec": 0, 00:05:10.779 "rw_mbytes_per_sec": 0, 00:05:10.779 "w_mbytes_per_sec": 0 00:05:10.779 }, 00:05:10.779 "block_size": 512, 00:05:10.779 "claimed": false, 00:05:10.779 "driver_specific": { 00:05:10.779 "passthru": { 00:05:10.779 "base_bdev_name": "Malloc0", 00:05:10.779 "name": "Passthru0" 00:05:10.779 } 00:05:10.779 }, 00:05:10.779 "memory_domains": [ 00:05:10.779 { 00:05:10.779 "dma_device_id": "system", 00:05:10.779 "dma_device_type": 1 00:05:10.779 }, 00:05:10.779 { 00:05:10.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.779 "dma_device_type": 2 00:05:10.779 } 00:05:10.779 ], 00:05:10.779 "name": "Passthru0", 00:05:10.779 "num_blocks": 16384, 00:05:10.779 "product_name": "passthru", 00:05:10.779 "supported_io_types": { 00:05:10.779 "abort": true, 00:05:10.779 "compare": false, 00:05:10.779 "compare_and_write": false, 00:05:10.779 "copy": true, 00:05:10.779 "flush": true, 00:05:10.779 "get_zone_info": false, 00:05:10.779 "nvme_admin": false, 00:05:10.779 "nvme_io": false, 00:05:10.779 "nvme_io_md": false, 00:05:10.779 "nvme_iov_md": false, 00:05:10.779 "read": true, 00:05:10.779 "reset": true, 00:05:10.779 "seek_data": false, 00:05:10.779 "seek_hole": false, 00:05:10.779 "unmap": true, 00:05:10.779 "write": true, 00:05:10.779 "write_zeroes": true, 00:05:10.779 "zcopy": true, 00:05:10.779 "zone_append": false, 00:05:10.779 "zone_management": false 00:05:10.779 }, 00:05:10.779 "uuid": "caf19717-d924-5b0d-8c5b-c29acf73aebf", 00:05:10.779 "zoned": false 00:05:10.779 } 00:05:10.779 ]' 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.779 14:09:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.779 ************************************ 00:05:10.779 END TEST rpc_integrity 00:05:10.779 ************************************ 00:05:10.779 00:05:10.779 real 0m0.254s 00:05:10.779 user 0m0.123s 00:05:10.779 sys 0m0.040s 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.779 14:09:38 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:10.779 14:09:38 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.779 14:09:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 ************************************ 00:05:10.779 START TEST rpc_plugins 00:05:10.779 ************************************ 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.779 { 00:05:10.779 "aliases": [ 00:05:10.779 "5c1ae0b2-7325-4d27-ba65-02ef9ea8d173" 00:05:10.779 ], 00:05:10.779 "assigned_rate_limits": { 00:05:10.779 "r_mbytes_per_sec": 0, 00:05:10.779 "rw_ios_per_sec": 0, 00:05:10.779 "rw_mbytes_per_sec": 0, 00:05:10.779 "w_mbytes_per_sec": 0 00:05:10.779 }, 00:05:10.779 "block_size": 4096, 00:05:10.779 "claimed": false, 00:05:10.779 "driver_specific": {}, 00:05:10.779 "memory_domains": [ 00:05:10.779 { 00:05:10.779 "dma_device_id": "system", 00:05:10.779 "dma_device_type": 1 00:05:10.779 }, 00:05:10.779 { 00:05:10.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.779 "dma_device_type": 2 00:05:10.779 } 00:05:10.779 ], 00:05:10.779 "name": "Malloc1", 00:05:10.779 "num_blocks": 256, 00:05:10.779 "product_name": "Malloc disk", 00:05:10.779 "supported_io_types": { 00:05:10.779 "abort": true, 00:05:10.779 "compare": false, 00:05:10.779 "compare_and_write": false, 00:05:10.779 "copy": true, 00:05:10.779 "flush": true, 00:05:10.779 "get_zone_info": false, 00:05:10.779 "nvme_admin": false, 00:05:10.779 "nvme_io": false, 00:05:10.779 "nvme_io_md": false, 00:05:10.779 "nvme_iov_md": false, 00:05:10.779 "read": true, 00:05:10.779 "reset": true, 00:05:10.779 "seek_data": false, 00:05:10.779 "seek_hole": false, 00:05:10.779 "unmap": true, 00:05:10.779 "write": true, 00:05:10.779 "write_zeroes": true, 00:05:10.779 "zcopy": true, 00:05:10.779 "zone_append": false, 
00:05:10.779 "zone_management": false 00:05:10.779 }, 00:05:10.779 "uuid": "5c1ae0b2-7325-4d27-ba65-02ef9ea8d173", 00:05:10.779 "zoned": false 00:05:10.779 } 00:05:10.779 ]' 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:10.779 14:09:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.779 00:05:10.779 real 0m0.117s 00:05:10.779 user 0m0.067s 00:05:10.779 sys 0m0.015s 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.779 ************************************ 00:05:10.779 END TEST rpc_plugins 00:05:10.779 ************************************ 00:05:10.779 14:09:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 14:09:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.779 14:09:38 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:10.779 14:09:38 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.779 14:09:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 ************************************ 00:05:10.779 START TEST rpc_trace_cmd_test 00:05:10.779 ************************************ 00:05:10.779 14:09:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:10.779 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.779 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.779 14:09:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.779 14:09:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.036 14:09:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.036 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:11.036 "bdev": { 00:05:11.036 "mask": "0x8", 00:05:11.036 "tpoint_mask": "0xffffffffffffffff" 00:05:11.036 }, 00:05:11.036 "bdev_nvme": { 00:05:11.036 "mask": "0x4000", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "bdev_raid": { 00:05:11.036 "mask": "0x20000", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "blob": { 00:05:11.036 "mask": "0x10000", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "blobfs": { 00:05:11.036 "mask": "0x80", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "dsa": { 00:05:11.036 "mask": "0x200", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "ftl": { 00:05:11.036 "mask": "0x40", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "iaa": { 00:05:11.036 "mask": "0x1000", 
00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "iscsi_conn": { 00:05:11.036 "mask": "0x2", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "nvme_pcie": { 00:05:11.036 "mask": "0x800", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "nvme_tcp": { 00:05:11.036 "mask": "0x2000", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "nvmf_rdma": { 00:05:11.036 "mask": "0x10", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "nvmf_tcp": { 00:05:11.036 "mask": "0x20", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "scheduler": { 00:05:11.036 "mask": "0x40000", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.036 "scsi": { 00:05:11.036 "mask": "0x4", 00:05:11.036 "tpoint_mask": "0x0" 00:05:11.036 }, 00:05:11.037 "sock": { 00:05:11.037 "mask": "0x8000", 00:05:11.037 "tpoint_mask": "0x0" 00:05:11.037 }, 00:05:11.037 "thread": { 00:05:11.037 "mask": "0x400", 00:05:11.037 "tpoint_mask": "0x0" 00:05:11.037 }, 00:05:11.037 "tpoint_group_mask": "0x8", 00:05:11.037 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58574" 00:05:11.037 }' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:11.037 00:05:11.037 real 0m0.177s 00:05:11.037 user 0m0.147s 00:05:11.037 sys 0m0.023s 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.037 14:09:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.037 ************************************ 00:05:11.037 END TEST rpc_trace_cmd_test 00:05:11.037 ************************************ 00:05:11.037 14:09:38 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:11.037 14:09:38 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:11.037 14:09:38 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.037 14:09:38 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.037 14:09:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.037 ************************************ 00:05:11.037 START TEST go_rpc 00:05:11.037 ************************************ 00:05:11.037 14:09:38 rpc.go_rpc -- common/autotest_common.sh@1127 -- # go_rpc 00:05:11.037 14:09:38 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:11.037 14:09:38 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:11.037 14:09:38 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:11.037 14:09:38 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:11.037 14:09:38 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.037 14:09:38 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.037 14:09:38 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.309 14:09:38 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["622e185f-eaa3-4551-9e60-44115b28808b"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"622e185f-eaa3-4551-9e60-44115b28808b","zoned":false}]' 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:11.309 14:09:38 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.309 14:09:38 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.309 14:09:38 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:11.309 14:09:38 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:11.309 00:05:11.309 real 0m0.179s 00:05:11.309 user 0m0.095s 00:05:11.309 sys 0m0.032s 00:05:11.309 14:09:38 rpc.go_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.309 14:09:38 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.309 ************************************ 00:05:11.309 END TEST go_rpc 00:05:11.309 ************************************ 00:05:11.310 14:09:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:11.310 14:09:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:11.310 14:09:38 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.310 14:09:38 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.310 14:09:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 ************************************ 00:05:11.310 START TEST rpc_daemon_integrity 00:05:11.310 ************************************ 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.310 
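The rpc_daemon_integrity suite starting here repeats the cycle the earlier rpc_integrity run established: create a malloc bdev, count the bdev list with jq, layer a passthru on top, then tear both down and check the list drains back to empty. Condensed into the repo's Python client (the scripts/rpc.py path is assumed from the standard SPDK layout; every method name and argument below appears in this log):

import json
import subprocess

# Assumed path: <repo>/scripts/rpc.py is the stock SPDK RPC client.
RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

def rpc(*args):
    out = subprocess.run([RPC, *args], check=True,
                         capture_output=True, text=True)
    return out.stdout

# Assumes a fresh target with no other bdevs registered.
name = rpc("bdev_malloc_create", "8", "512").strip()    # e.g. "Malloc3"
assert len(json.loads(rpc("bdev_get_bdevs"))) == 1      # the jq-length check
rpc("bdev_passthru_create", "-b", name, "-p", "Passthru0")
assert len(json.loads(rpc("bdev_get_bdevs"))) == 2
rpc("bdev_passthru_delete", "Passthru0")
rpc("bdev_malloc_delete", name)
assert json.loads(rpc("bdev_get_bdevs")) == []          # back to empty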
14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.310 { 00:05:11.310 "aliases": [ 00:05:11.310 "18a24594-3974-4a0b-aebd-88f578e34940" 00:05:11.310 ], 00:05:11.310 "assigned_rate_limits": { 00:05:11.310 "r_mbytes_per_sec": 0, 00:05:11.310 "rw_ios_per_sec": 0, 00:05:11.310 "rw_mbytes_per_sec": 0, 00:05:11.310 "w_mbytes_per_sec": 0 00:05:11.310 }, 00:05:11.310 "block_size": 512, 00:05:11.310 "claimed": false, 00:05:11.310 "driver_specific": {}, 00:05:11.310 "memory_domains": [ 00:05:11.310 { 00:05:11.310 "dma_device_id": "system", 00:05:11.310 "dma_device_type": 1 00:05:11.310 }, 00:05:11.310 { 00:05:11.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.310 "dma_device_type": 2 00:05:11.310 } 00:05:11.310 ], 00:05:11.310 "name": "Malloc3", 00:05:11.310 "num_blocks": 16384, 00:05:11.310 "product_name": "Malloc disk", 00:05:11.310 "supported_io_types": { 00:05:11.310 "abort": true, 00:05:11.310 "compare": false, 00:05:11.310 "compare_and_write": false, 00:05:11.310 "copy": true, 00:05:11.310 "flush": true, 00:05:11.310 "get_zone_info": false, 00:05:11.310 "nvme_admin": false, 00:05:11.310 "nvme_io": false, 00:05:11.310 "nvme_io_md": false, 00:05:11.310 "nvme_iov_md": false, 00:05:11.310 "read": true, 00:05:11.310 "reset": true, 00:05:11.310 "seek_data": false, 00:05:11.310 "seek_hole": false, 00:05:11.310 "unmap": true, 00:05:11.310 "write": true, 00:05:11.310 "write_zeroes": true, 00:05:11.310 "zcopy": true, 00:05:11.310 "zone_append": false, 00:05:11.310 "zone_management": false 00:05:11.310 }, 00:05:11.310 "uuid": "18a24594-3974-4a0b-aebd-88f578e34940", 00:05:11.310 "zoned": false 00:05:11.310 } 00:05:11.310 ]' 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.310 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 [2024-11-06 14:09:38.943554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:11.568 [2024-11-06 14:09:38.943637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.568 [2024-11-06 14:09:38.943660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:11.568 [2024-11-06 14:09:38.943670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:11.568 [2024-11-06 14:09:38.945979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.568 [2024-11-06 14:09:38.946027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.568 Passthru0 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.568 { 00:05:11.568 "aliases": [ 00:05:11.568 "18a24594-3974-4a0b-aebd-88f578e34940" 00:05:11.568 ], 00:05:11.568 "assigned_rate_limits": { 00:05:11.568 "r_mbytes_per_sec": 0, 00:05:11.568 "rw_ios_per_sec": 0, 00:05:11.568 "rw_mbytes_per_sec": 0, 00:05:11.568 "w_mbytes_per_sec": 0 00:05:11.568 }, 00:05:11.568 "block_size": 512, 00:05:11.568 "claim_type": "exclusive_write", 00:05:11.568 "claimed": true, 00:05:11.568 "driver_specific": {}, 00:05:11.568 "memory_domains": [ 00:05:11.568 { 00:05:11.568 "dma_device_id": "system", 00:05:11.568 "dma_device_type": 1 00:05:11.568 }, 00:05:11.568 { 00:05:11.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.568 "dma_device_type": 2 00:05:11.568 } 00:05:11.568 ], 00:05:11.568 "name": "Malloc3", 00:05:11.568 "num_blocks": 16384, 00:05:11.568 "product_name": "Malloc disk", 00:05:11.568 "supported_io_types": { 00:05:11.568 "abort": true, 00:05:11.568 "compare": false, 00:05:11.568 "compare_and_write": false, 00:05:11.568 "copy": true, 00:05:11.568 "flush": true, 00:05:11.568 "get_zone_info": false, 00:05:11.568 "nvme_admin": false, 00:05:11.568 "nvme_io": false, 00:05:11.568 "nvme_io_md": false, 00:05:11.568 "nvme_iov_md": false, 00:05:11.568 "read": true, 00:05:11.568 "reset": true, 00:05:11.568 "seek_data": false, 00:05:11.568 "seek_hole": false, 00:05:11.568 "unmap": true, 00:05:11.568 "write": true, 00:05:11.568 "write_zeroes": true, 00:05:11.568 "zcopy": true, 00:05:11.568 "zone_append": false, 00:05:11.568 "zone_management": false 00:05:11.568 }, 00:05:11.568 "uuid": "18a24594-3974-4a0b-aebd-88f578e34940", 00:05:11.568 "zoned": false 00:05:11.568 }, 00:05:11.568 { 00:05:11.568 "aliases": [ 00:05:11.568 "e7e8ccd5-8b5b-5589-adb1-04e4ed936ad6" 00:05:11.568 ], 00:05:11.568 "assigned_rate_limits": { 00:05:11.568 "r_mbytes_per_sec": 0, 00:05:11.568 "rw_ios_per_sec": 0, 00:05:11.568 "rw_mbytes_per_sec": 0, 00:05:11.568 "w_mbytes_per_sec": 0 00:05:11.568 }, 00:05:11.568 "block_size": 512, 00:05:11.568 "claimed": false, 00:05:11.568 "driver_specific": { 00:05:11.568 "passthru": { 00:05:11.568 "base_bdev_name": "Malloc3", 00:05:11.568 "name": "Passthru0" 00:05:11.568 } 00:05:11.568 }, 00:05:11.568 "memory_domains": [ 00:05:11.568 { 00:05:11.568 "dma_device_id": "system", 00:05:11.568 "dma_device_type": 1 00:05:11.568 }, 00:05:11.568 { 00:05:11.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.568 "dma_device_type": 2 00:05:11.568 } 00:05:11.568 ], 00:05:11.568 "name": "Passthru0", 00:05:11.568 "num_blocks": 16384, 00:05:11.568 "product_name": "passthru", 00:05:11.568 "supported_io_types": { 00:05:11.568 "abort": true, 00:05:11.568 "compare": false, 00:05:11.568 "compare_and_write": false, 00:05:11.568 "copy": true, 
00:05:11.568 "flush": true, 00:05:11.568 "get_zone_info": false, 00:05:11.568 "nvme_admin": false, 00:05:11.568 "nvme_io": false, 00:05:11.568 "nvme_io_md": false, 00:05:11.568 "nvme_iov_md": false, 00:05:11.568 "read": true, 00:05:11.568 "reset": true, 00:05:11.568 "seek_data": false, 00:05:11.568 "seek_hole": false, 00:05:11.568 "unmap": true, 00:05:11.568 "write": true, 00:05:11.568 "write_zeroes": true, 00:05:11.568 "zcopy": true, 00:05:11.568 "zone_append": false, 00:05:11.568 "zone_management": false 00:05:11.568 }, 00:05:11.568 "uuid": "e7e8ccd5-8b5b-5589-adb1-04e4ed936ad6", 00:05:11.568 "zoned": false 00:05:11.568 } 00:05:11.568 ]' 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.568 14:09:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.568 00:05:11.568 real 0m0.238s 00:05:11.568 user 0m0.119s 00:05:11.568 sys 0m0.037s 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.568 ************************************ 00:05:11.568 END TEST rpc_daemon_integrity 00:05:11.568 ************************************ 00:05:11.568 14:09:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 14:09:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.568 14:09:39 rpc -- rpc/rpc.sh@84 -- # killprocess 58574 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@952 -- # '[' -z 58574 ']' 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@956 -- # kill -0 58574 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@957 -- # uname 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58574 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.568 killing process with pid 58574 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58574' 00:05:11.568 14:09:39 rpc -- 
common/autotest_common.sh@971 -- # kill 58574 00:05:11.568 14:09:39 rpc -- common/autotest_common.sh@976 -- # wait 58574 00:05:13.467 00:05:13.467 real 0m3.718s 00:05:13.467 user 0m4.239s 00:05:13.467 sys 0m0.686s 00:05:13.467 14:09:40 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:13.467 14:09:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.467 ************************************ 00:05:13.467 END TEST rpc 00:05:13.467 ************************************ 00:05:13.467 14:09:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:13.467 14:09:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:13.467 14:09:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.467 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:13.467 ************************************ 00:05:13.467 START TEST skip_rpc 00:05:13.467 ************************************ 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:13.467 * Looking for test storage... 00:05:13.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.467 14:09:40 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.467 --rc genhtml_branch_coverage=1 00:05:13.467 --rc genhtml_function_coverage=1 00:05:13.467 --rc genhtml_legend=1 00:05:13.467 --rc geninfo_all_blocks=1 00:05:13.467 --rc geninfo_unexecuted_blocks=1 00:05:13.467 00:05:13.467 ' 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.467 --rc genhtml_branch_coverage=1 00:05:13.467 --rc genhtml_function_coverage=1 00:05:13.467 --rc genhtml_legend=1 00:05:13.467 --rc geninfo_all_blocks=1 00:05:13.467 --rc geninfo_unexecuted_blocks=1 00:05:13.467 00:05:13.467 ' 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.467 --rc genhtml_branch_coverage=1 00:05:13.467 --rc genhtml_function_coverage=1 00:05:13.467 --rc genhtml_legend=1 00:05:13.467 --rc geninfo_all_blocks=1 00:05:13.467 --rc geninfo_unexecuted_blocks=1 00:05:13.467 00:05:13.467 ' 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.467 --rc genhtml_branch_coverage=1 00:05:13.467 --rc genhtml_function_coverage=1 00:05:13.467 --rc genhtml_legend=1 00:05:13.467 --rc geninfo_all_blocks=1 00:05:13.467 --rc geninfo_unexecuted_blocks=1 00:05:13.467 00:05:13.467 ' 00:05:13.467 14:09:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.467 14:09:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:13.467 14:09:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.467 14:09:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.467 ************************************ 00:05:13.467 START TEST skip_rpc 00:05:13.467 ************************************ 00:05:13.467 14:09:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:13.467 14:09:40 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58850 00:05:13.467 14:09:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.467 14:09:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:13.467 14:09:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:13.468 [2024-11-06 14:09:40.860265] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:13.468 [2024-11-06 14:09:40.860421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:05:13.468 [2024-11-06 14:09:41.028482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.726 [2024-11-06 14:09:41.156258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.007 2024/11/06 14:09:45 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58850 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58850 ']' 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58850 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58850 00:05:19.007 killing process with pid 58850 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:19.007 14:09:45 skip_rpc.skip_rpc 
-- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58850' 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58850 00:05:19.007 14:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58850 00:05:19.574 00:05:19.574 real 0m6.247s 00:05:19.574 user 0m5.846s 00:05:19.574 sys 0m0.284s 00:05:19.574 14:09:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.574 14:09:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.574 ************************************ 00:05:19.574 END TEST skip_rpc 00:05:19.574 ************************************ 00:05:19.574 14:09:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.574 14:09:47 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.574 14:09:47 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.574 14:09:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.574 ************************************ 00:05:19.574 START TEST skip_rpc_with_json 00:05:19.574 ************************************ 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58954 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58954 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58954 ']' 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.574 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.574 [2024-11-06 14:09:47.121921] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:19.574 [2024-11-06 14:09:47.122043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58954 ] 00:05:19.832 [2024-11-06 14:09:47.273965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.832 [2024-11-06 14:09:47.361654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.397 [2024-11-06 14:09:47.942683] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.397 2024/11/06 14:09:47 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:20.397 request: 00:05:20.397 { 00:05:20.397 "method": "nvmf_get_transports", 00:05:20.397 "params": { 00:05:20.397 "trtype": "tcp" 00:05:20.397 } 00:05:20.397 } 00:05:20.397 Got JSON-RPC error response 00:05:20.397 GoRPCClient: error on JSON-RPC call 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.397 [2024-11-06 14:09:47.950784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.397 14:09:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.663 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.663 14:09:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.663 { 00:05:20.663 "subsystems": [ 00:05:20.663 { 00:05:20.663 "subsystem": "fsdev", 00:05:20.663 "config": [ 00:05:20.663 { 00:05:20.663 "method": "fsdev_set_opts", 00:05:20.663 "params": { 00:05:20.663 "fsdev_io_cache_size": 256, 00:05:20.663 "fsdev_io_pool_size": 65535 00:05:20.663 } 00:05:20.663 } 00:05:20.663 ] 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "subsystem": "vfio_user_target", 00:05:20.663 "config": null 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "subsystem": "keyring", 00:05:20.663 "config": [] 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "subsystem": "iobuf", 00:05:20.663 "config": [ 00:05:20.663 { 00:05:20.663 "method": "iobuf_set_options", 00:05:20.663 "params": { 00:05:20.663 "enable_numa": false, 00:05:20.663 "large_bufsize": 135168, 00:05:20.663 
"large_pool_count": 1024, 00:05:20.663 "small_bufsize": 8192, 00:05:20.663 "small_pool_count": 8192 00:05:20.663 } 00:05:20.663 } 00:05:20.663 ] 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "subsystem": "sock", 00:05:20.663 "config": [ 00:05:20.663 { 00:05:20.663 "method": "sock_set_default_impl", 00:05:20.663 "params": { 00:05:20.663 "impl_name": "posix" 00:05:20.663 } 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "method": "sock_impl_set_options", 00:05:20.663 "params": { 00:05:20.663 "enable_ktls": false, 00:05:20.663 "enable_placement_id": 0, 00:05:20.663 "enable_quickack": false, 00:05:20.663 "enable_recv_pipe": true, 00:05:20.663 "enable_zerocopy_send_client": false, 00:05:20.663 "enable_zerocopy_send_server": true, 00:05:20.663 "impl_name": "ssl", 00:05:20.663 "recv_buf_size": 4096, 00:05:20.663 "send_buf_size": 4096, 00:05:20.663 "tls_version": 0, 00:05:20.663 "zerocopy_threshold": 0 00:05:20.663 } 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "method": "sock_impl_set_options", 00:05:20.663 "params": { 00:05:20.663 "enable_ktls": false, 00:05:20.663 "enable_placement_id": 0, 00:05:20.663 "enable_quickack": false, 00:05:20.663 "enable_recv_pipe": true, 00:05:20.663 "enable_zerocopy_send_client": false, 00:05:20.663 "enable_zerocopy_send_server": true, 00:05:20.663 "impl_name": "posix", 00:05:20.663 "recv_buf_size": 2097152, 00:05:20.663 "send_buf_size": 2097152, 00:05:20.663 "tls_version": 0, 00:05:20.663 "zerocopy_threshold": 0 00:05:20.663 } 00:05:20.663 } 00:05:20.663 ] 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "subsystem": "vmd", 00:05:20.663 "config": [] 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "subsystem": "accel", 00:05:20.663 "config": [ 00:05:20.663 { 00:05:20.663 "method": "accel_set_options", 00:05:20.663 "params": { 00:05:20.663 "buf_count": 2048, 00:05:20.663 "large_cache_size": 16, 00:05:20.663 "sequence_count": 2048, 00:05:20.663 "small_cache_size": 128, 00:05:20.663 "task_count": 2048 00:05:20.663 } 00:05:20.663 } 00:05:20.663 ] 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "subsystem": "bdev", 00:05:20.663 "config": [ 00:05:20.663 { 00:05:20.663 "method": "bdev_set_options", 00:05:20.663 "params": { 00:05:20.663 "bdev_auto_examine": true, 00:05:20.663 "bdev_io_cache_size": 256, 00:05:20.663 "bdev_io_pool_size": 65535, 00:05:20.663 "iobuf_large_cache_size": 16, 00:05:20.663 "iobuf_small_cache_size": 128 00:05:20.663 } 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "method": "bdev_raid_set_options", 00:05:20.663 "params": { 00:05:20.663 "process_max_bandwidth_mb_sec": 0, 00:05:20.663 "process_window_size_kb": 1024 00:05:20.663 } 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "method": "bdev_iscsi_set_options", 00:05:20.663 "params": { 00:05:20.663 "timeout_sec": 30 00:05:20.663 } 00:05:20.663 }, 00:05:20.663 { 00:05:20.663 "method": "bdev_nvme_set_options", 00:05:20.663 "params": { 00:05:20.663 "action_on_timeout": "none", 00:05:20.663 "allow_accel_sequence": false, 00:05:20.663 "arbitration_burst": 0, 00:05:20.663 "bdev_retry_count": 3, 00:05:20.663 "ctrlr_loss_timeout_sec": 0, 00:05:20.663 "delay_cmd_submit": true, 00:05:20.663 "dhchap_dhgroups": [ 00:05:20.663 "null", 00:05:20.663 "ffdhe2048", 00:05:20.663 "ffdhe3072", 00:05:20.663 "ffdhe4096", 00:05:20.663 "ffdhe6144", 00:05:20.663 "ffdhe8192" 00:05:20.663 ], 00:05:20.663 "dhchap_digests": [ 00:05:20.663 "sha256", 00:05:20.663 "sha384", 00:05:20.663 "sha512" 00:05:20.663 ], 00:05:20.663 "disable_auto_failback": false, 00:05:20.663 "fast_io_fail_timeout_sec": 0, 00:05:20.663 "generate_uuids": false, 00:05:20.663 
"high_priority_weight": 0, 00:05:20.663 "io_path_stat": false, 00:05:20.663 "io_queue_requests": 0, 00:05:20.663 "keep_alive_timeout_ms": 10000, 00:05:20.663 "low_priority_weight": 0, 00:05:20.663 "medium_priority_weight": 0, 00:05:20.663 "nvme_adminq_poll_period_us": 10000, 00:05:20.663 "nvme_error_stat": false, 00:05:20.663 "nvme_ioq_poll_period_us": 0, 00:05:20.663 "rdma_cm_event_timeout_ms": 0, 00:05:20.664 "rdma_max_cq_size": 0, 00:05:20.664 "rdma_srq_size": 0, 00:05:20.664 "reconnect_delay_sec": 0, 00:05:20.664 "timeout_admin_us": 0, 00:05:20.664 "timeout_us": 0, 00:05:20.664 "transport_ack_timeout": 0, 00:05:20.664 "transport_retry_count": 4, 00:05:20.664 "transport_tos": 0 00:05:20.664 } 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "method": "bdev_nvme_set_hotplug", 00:05:20.664 "params": { 00:05:20.664 "enable": false, 00:05:20.664 "period_us": 100000 00:05:20.664 } 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "method": "bdev_wait_for_examine" 00:05:20.664 } 00:05:20.664 ] 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "scsi", 00:05:20.664 "config": null 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "scheduler", 00:05:20.664 "config": [ 00:05:20.664 { 00:05:20.664 "method": "framework_set_scheduler", 00:05:20.664 "params": { 00:05:20.664 "name": "static" 00:05:20.664 } 00:05:20.664 } 00:05:20.664 ] 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "vhost_scsi", 00:05:20.664 "config": [] 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "vhost_blk", 00:05:20.664 "config": [] 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "ublk", 00:05:20.664 "config": [] 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "nbd", 00:05:20.664 "config": [] 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "nvmf", 00:05:20.664 "config": [ 00:05:20.664 { 00:05:20.664 "method": "nvmf_set_config", 00:05:20.664 "params": { 00:05:20.664 "admin_cmd_passthru": { 00:05:20.664 "identify_ctrlr": false 00:05:20.664 }, 00:05:20.664 "dhchap_dhgroups": [ 00:05:20.664 "null", 00:05:20.664 "ffdhe2048", 00:05:20.664 "ffdhe3072", 00:05:20.664 "ffdhe4096", 00:05:20.664 "ffdhe6144", 00:05:20.664 "ffdhe8192" 00:05:20.664 ], 00:05:20.664 "dhchap_digests": [ 00:05:20.664 "sha256", 00:05:20.664 "sha384", 00:05:20.664 "sha512" 00:05:20.664 ], 00:05:20.664 "discovery_filter": "match_any" 00:05:20.664 } 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "method": "nvmf_set_max_subsystems", 00:05:20.664 "params": { 00:05:20.664 "max_subsystems": 1024 00:05:20.664 } 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "method": "nvmf_set_crdt", 00:05:20.664 "params": { 00:05:20.664 "crdt1": 0, 00:05:20.664 "crdt2": 0, 00:05:20.664 "crdt3": 0 00:05:20.664 } 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "method": "nvmf_create_transport", 00:05:20.664 "params": { 00:05:20.664 "abort_timeout_sec": 1, 00:05:20.664 "ack_timeout": 0, 00:05:20.664 "buf_cache_size": 4294967295, 00:05:20.664 "c2h_success": true, 00:05:20.664 "data_wr_pool_size": 0, 00:05:20.664 "dif_insert_or_strip": false, 00:05:20.664 "in_capsule_data_size": 4096, 00:05:20.664 "io_unit_size": 131072, 00:05:20.664 "max_aq_depth": 128, 00:05:20.664 "max_io_qpairs_per_ctrlr": 127, 00:05:20.664 "max_io_size": 131072, 00:05:20.664 "max_queue_depth": 128, 00:05:20.664 "num_shared_buffers": 511, 00:05:20.664 "sock_priority": 0, 00:05:20.664 "trtype": "TCP", 00:05:20.664 "zcopy": false 00:05:20.664 } 00:05:20.664 } 00:05:20.664 ] 00:05:20.664 }, 00:05:20.664 { 00:05:20.664 "subsystem": "iscsi", 00:05:20.664 "config": [ 00:05:20.664 { 
00:05:20.664 "method": "iscsi_set_options", 00:05:20.664 "params": { 00:05:20.664 "allow_duplicated_isid": false, 00:05:20.664 "chap_group": 0, 00:05:20.664 "data_out_pool_size": 2048, 00:05:20.664 "default_time2retain": 20, 00:05:20.664 "default_time2wait": 2, 00:05:20.664 "disable_chap": false, 00:05:20.664 "error_recovery_level": 0, 00:05:20.664 "first_burst_length": 8192, 00:05:20.664 "immediate_data": true, 00:05:20.664 "immediate_data_pool_size": 16384, 00:05:20.664 "max_connections_per_session": 2, 00:05:20.664 "max_large_datain_per_connection": 64, 00:05:20.664 "max_queue_depth": 64, 00:05:20.664 "max_r2t_per_connection": 4, 00:05:20.664 "max_sessions": 128, 00:05:20.664 "mutual_chap": false, 00:05:20.664 "node_base": "iqn.2016-06.io.spdk", 00:05:20.664 "nop_in_interval": 30, 00:05:20.664 "nop_timeout": 60, 00:05:20.664 "pdu_pool_size": 36864, 00:05:20.664 "require_chap": false 00:05:20.664 } 00:05:20.664 } 00:05:20.664 ] 00:05:20.664 } 00:05:20.664 ] 00:05:20.664 } 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58954 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58954 ']' 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58954 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58954 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:20.664 killing process with pid 58954 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58954' 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58954 00:05:20.664 14:09:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58954 00:05:22.048 14:09:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59000 00:05:22.048 14:09:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:22.048 14:09:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59000 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 59000 ']' 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 59000 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59000 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.306 killing process with pid 59000 00:05:27.306 14:09:54 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.307 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59000' 00:05:27.307 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 59000 00:05:27.307 14:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 59000 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.307 00:05:28.307 real 0m8.770s 00:05:28.307 user 0m8.356s 00:05:28.307 sys 0m0.604s 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.307 ************************************ 00:05:28.307 END TEST skip_rpc_with_json 00:05:28.307 ************************************ 00:05:28.307 14:09:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.307 14:09:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.307 14:09:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.307 14:09:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.307 ************************************ 00:05:28.307 START TEST skip_rpc_with_delay 00:05:28.307 ************************************ 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.307 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.307 [2024-11-06 14:09:55.932494] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to 
be started. 00:05:28.565 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:28.565 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.565 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.565 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.565 00:05:28.565 real 0m0.118s 00:05:28.565 user 0m0.051s 00:05:28.565 sys 0m0.065s 00:05:28.565 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.565 14:09:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:28.565 ************************************ 00:05:28.565 END TEST skip_rpc_with_delay 00:05:28.565 ************************************ 00:05:28.565 14:09:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:28.565 14:09:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:28.565 14:09:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:28.565 14:09:56 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.565 14:09:56 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.565 14:09:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.565 ************************************ 00:05:28.565 START TEST exit_on_failed_rpc_init 00:05:28.565 ************************************ 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59123 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59123 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 59123 ']' 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.565 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.565 [2024-11-06 14:09:56.105369] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:28.565 [2024-11-06 14:09:56.105506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:05:28.822 [2024-11-06 14:09:56.262606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.822 [2024-11-06 14:09:56.364287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:29.388 14:09:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.648 [2024-11-06 14:09:57.038808] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:29.648 [2024-11-06 14:09:57.038921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ] 00:05:29.648 [2024-11-06 14:09:57.190435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.648 [2024-11-06 14:09:57.276158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.648 [2024-11-06 14:09:57.276239] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
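[annotation] The socket collision traced here is exactly what exit_on_failed_rpc_init asserts: a second spdk_tgt pointed at the default RPC socket must fail initialization and exit non-zero (the rpc.c: 180 error above, then the rpc.c: 166 and spdk_app_stop lines that follow). A minimal out-of-harness sketch of the same scenario, assuming a built spdk_tgt and the default /var/tmp/spdk.sock socket as used in this run:

  # First target binds the default RPC Unix socket /var/tmp/spdk.sock.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  first_pid=$!
  sleep 1   # give the first target time to create the socket
  # Second target finds the socket in use, logs the rpc.c errors seen in
  # this trace, and stops with a non-zero status; '||' captures that.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 || echo "expected init failure, es=$?"
  kill -SIGINT "$first_pid"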
00:05:29.648 [2024-11-06 14:09:57.276249] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:29.648 [2024-11-06 14:09:57.276258] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59123 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 59123 ']' 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 59123 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59123 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.905 killing process with pid 59123 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59123' 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 59123 00:05:29.905 14:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 59123 00:05:31.803 00:05:31.803 real 0m2.954s 00:05:31.803 user 0m3.205s 00:05:31.803 sys 0m0.430s 00:05:31.803 14:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.803 14:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 ************************************ 00:05:31.803 END TEST exit_on_failed_rpc_init 00:05:31.803 ************************************ 00:05:31.803 14:09:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.803 00:05:31.803 real 0m18.393s 00:05:31.803 user 0m17.594s 00:05:31.803 sys 0m1.552s 00:05:31.803 14:09:59 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.803 ************************************ 00:05:31.803 END TEST skip_rpc 00:05:31.803 ************************************ 00:05:31.803 14:09:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 14:09:59 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.803 14:09:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.803 14:09:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.803 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 
************************************ 00:05:31.803 START TEST rpc_client 00:05:31.803 ************************************ 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.803 * Looking for test storage... 00:05:31.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.803 14:09:59 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.803 --rc genhtml_branch_coverage=1 00:05:31.803 --rc genhtml_function_coverage=1 00:05:31.803 --rc genhtml_legend=1 00:05:31.803 --rc geninfo_all_blocks=1 00:05:31.803 --rc geninfo_unexecuted_blocks=1 00:05:31.803 00:05:31.803 ' 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.803 --rc genhtml_branch_coverage=1 00:05:31.803 --rc genhtml_function_coverage=1 00:05:31.803 --rc genhtml_legend=1 00:05:31.803 --rc geninfo_all_blocks=1 00:05:31.803 --rc geninfo_unexecuted_blocks=1 00:05:31.803 00:05:31.803 ' 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.803 --rc genhtml_branch_coverage=1 00:05:31.803 --rc genhtml_function_coverage=1 00:05:31.803 --rc genhtml_legend=1 00:05:31.803 --rc geninfo_all_blocks=1 00:05:31.803 --rc geninfo_unexecuted_blocks=1 00:05:31.803 00:05:31.803 ' 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.803 --rc genhtml_branch_coverage=1 00:05:31.803 --rc genhtml_function_coverage=1 00:05:31.803 --rc genhtml_legend=1 00:05:31.803 --rc geninfo_all_blocks=1 00:05:31.803 --rc geninfo_unexecuted_blocks=1 00:05:31.803 00:05:31.803 ' 00:05:31.803 14:09:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:31.803 OK 00:05:31.803 14:09:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.803 00:05:31.803 real 0m0.185s 00:05:31.803 user 0m0.097s 00:05:31.803 sys 0m0.094s 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.803 ************************************ 00:05:31.803 END TEST rpc_client 00:05:31.803 ************************************ 00:05:31.803 14:09:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 14:09:59 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.803 14:09:59 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.803 14:09:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.803 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 ************************************ 00:05:31.803 START TEST json_config 00:05:31.803 ************************************ 00:05:31.803 14:09:59 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.803 14:09:59 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.803 14:09:59 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.803 14:09:59 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.803 14:09:59 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.803 14:09:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.803 14:09:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.803 14:09:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.803 14:09:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.803 14:09:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.803 14:09:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.803 14:09:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.803 14:09:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:31.803 14:09:59 json_config -- scripts/common.sh@345 -- # : 1 00:05:31.803 14:09:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.803 14:09:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.803 14:09:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:31.803 14:09:59 json_config -- scripts/common.sh@353 -- # local d=1 00:05:31.803 14:09:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.803 14:09:59 json_config -- scripts/common.sh@355 -- # echo 1 00:05:31.803 14:09:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.803 14:09:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@353 -- # local d=2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.803 14:09:59 json_config -- scripts/common.sh@355 -- # echo 2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.803 14:09:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.803 14:09:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.804 14:09:59 json_config -- scripts/common.sh@368 -- # return 0 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.804 --rc genhtml_branch_coverage=1 00:05:31.804 --rc genhtml_function_coverage=1 00:05:31.804 --rc genhtml_legend=1 00:05:31.804 --rc geninfo_all_blocks=1 00:05:31.804 --rc geninfo_unexecuted_blocks=1 00:05:31.804 00:05:31.804 ' 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.804 --rc genhtml_branch_coverage=1 00:05:31.804 --rc genhtml_function_coverage=1 00:05:31.804 --rc genhtml_legend=1 00:05:31.804 --rc geninfo_all_blocks=1 00:05:31.804 --rc geninfo_unexecuted_blocks=1 00:05:31.804 00:05:31.804 ' 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.804 --rc genhtml_branch_coverage=1 00:05:31.804 --rc genhtml_function_coverage=1 00:05:31.804 --rc genhtml_legend=1 00:05:31.804 --rc geninfo_all_blocks=1 00:05:31.804 --rc geninfo_unexecuted_blocks=1 00:05:31.804 00:05:31.804 ' 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.804 --rc genhtml_branch_coverage=1 00:05:31.804 --rc genhtml_function_coverage=1 00:05:31.804 --rc genhtml_legend=1 00:05:31.804 --rc geninfo_all_blocks=1 00:05:31.804 --rc geninfo_unexecuted_blocks=1 00:05:31.804 00:05:31.804 ' 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.804 14:09:59 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.804 14:09:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.804 14:09:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.804 14:09:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.804 14:09:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.804 14:09:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.804 14:09:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.804 14:09:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.804 14:09:59 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.804 14:09:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@51 -- # : 0 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.804 14:09:59 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.804 14:09:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:31.804 INFO: JSON configuration test init 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.804 14:09:59 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.804 14:09:59 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.804 14:09:59 json_config -- json_config/common.sh@10 -- # shift 
00:05:31.804 14:09:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.804 14:09:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.804 14:09:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.804 14:09:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.804 14:09:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.804 14:09:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59306 00:05:31.804 14:09:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.804 Waiting for target to run... 00:05:31.804 14:09:59 json_config -- json_config/common.sh@25 -- # waitforlisten 59306 /var/tmp/spdk_tgt.sock 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@833 -- # '[' -z 59306 ']' 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.804 14:09:59 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.804 14:09:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.062 [2024-11-06 14:09:59.503553] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:32.062 [2024-11-06 14:09:59.504004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:05:32.320 [2024-11-06 14:09:59.828334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.320 [2024-11-06 14:09:59.919280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.887 00:05:32.887 14:10:00 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.887 14:10:00 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:32.887 14:10:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.887 14:10:00 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:32.887 14:10:00 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:32.887 14:10:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.887 14:10:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.887 14:10:00 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:32.888 14:10:00 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:32.888 14:10:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:32.888 14:10:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.888 14:10:00 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.888 14:10:00 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:32.888 14:10:00 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.820 14:10:01 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:33.820 14:10:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:33.820 14:10:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.820 14:10:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.820 14:10:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:33.820 14:10:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.820 14:10:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.820 14:10:01 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:33.821 14:10:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@54 -- # sort 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:33.821 14:10:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.821 14:10:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:33.821 14:10:01 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:34.078 14:10:01 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:34.078 14:10:01 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:34.078 14:10:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.078 14:10:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.078 14:10:01 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.078 14:10:01 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:05:34.078 14:10:01 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:34.078 14:10:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.078 14:10:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.078 MallocForNvmf0 00:05:34.078 14:10:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.078 14:10:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.336 MallocForNvmf1 00:05:34.336 14:10:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.336 14:10:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.593 [2024-11-06 14:10:02.031860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.593 14:10:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.593 14:10:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.849 14:10:02 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.849 14:10:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.849 14:10:02 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.849 14:10:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.108 14:10:02 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.108 14:10:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.365 [2024-11-06 14:10:02.848327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.365 14:10:02 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:35.365 14:10:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.365 14:10:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.365 14:10:02 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:35.365 14:10:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.365 14:10:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.365 14:10:02 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:35.365 14:10:02 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:05:35.365 14:10:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.623 MallocBdevForConfigChangeCheck 00:05:35.623 14:10:03 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:35.623 14:10:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.623 14:10:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.623 14:10:03 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:35.623 14:10:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.880 INFO: shutting down applications... 00:05:35.880 14:10:03 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:35.880 14:10:03 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:35.880 14:10:03 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:35.880 14:10:03 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:35.880 14:10:03 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.445 Calling clear_iscsi_subsystem 00:05:36.445 Calling clear_nvmf_subsystem 00:05:36.445 Calling clear_nbd_subsystem 00:05:36.445 Calling clear_ublk_subsystem 00:05:36.445 Calling clear_vhost_blk_subsystem 00:05:36.445 Calling clear_vhost_scsi_subsystem 00:05:36.445 Calling clear_bdev_subsystem 00:05:36.445 14:10:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:36.445 14:10:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:36.445 14:10:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:36.445 14:10:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.445 14:10:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.445 14:10:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.703 14:10:04 json_config -- json_config/json_config.sh@352 -- # break 00:05:36.703 14:10:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:36.703 14:10:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:36.703 14:10:04 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.703 14:10:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.703 14:10:04 json_config -- json_config/common.sh@35 -- # [[ -n 59306 ]] 00:05:36.703 14:10:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59306 00:05:36.703 14:10:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.703 14:10:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.703 14:10:04 json_config -- json_config/common.sh@41 -- # kill -0 59306 00:05:36.703 14:10:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.268 14:10:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.268 14:10:04 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.268 14:10:04 json_config -- json_config/common.sh@41 -- # kill -0 59306 00:05:37.268 14:10:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.268 14:10:04 json_config -- json_config/common.sh@43 -- # break 00:05:37.268 14:10:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.268 SPDK target shutdown done 00:05:37.268 INFO: relaunching applications... 00:05:37.268 14:10:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.268 14:10:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:37.268 14:10:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.268 14:10:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.268 14:10:04 json_config -- json_config/common.sh@10 -- # shift 00:05:37.268 14:10:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.268 14:10:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.268 14:10:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.268 14:10:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.268 14:10:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.268 14:10:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59580 00:05:37.268 Waiting for target to run... 00:05:37.268 14:10:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.268 14:10:04 json_config -- json_config/common.sh@25 -- # waitforlisten 59580 /var/tmp/spdk_tgt.sock 00:05:37.268 14:10:04 json_config -- common/autotest_common.sh@833 -- # '[' -z 59580 ']' 00:05:37.268 14:10:04 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.268 14:10:04 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:37.268 14:10:04 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.268 14:10:04 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.268 14:10:04 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:37.268 14:10:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.268 [2024-11-06 14:10:04.794569] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
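The relaunch above blocks in waitforlisten (common.sh@25) until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A minimal sketch of that polling pattern, assuming rpc.py's -t timeout flag and the rpc_get_methods call behave as in this tree (the retry count and sleep interval here are illustrative, not the harness's exact values):

    # poll the RPC socket until the target answers, or fail if it dies
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died during startup
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0   # socket is serving RPCs
            sleep 0.1
        done
        return 1
    }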
00:05:37.268 [2024-11-06 14:10:04.795107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:05:37.526 [2024-11-06 14:10:05.121670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.785 [2024-11-06 14:10:05.197858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.350 [2024-11-06 14:10:05.851009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.350 [2024-11-06 14:10:05.883111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.350 14:10:05 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.350 14:10:05 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:38.350 00:05:38.350 14:10:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:38.350 14:10:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:38.350 INFO: Checking if target configuration is the same... 00:05:38.350 14:10:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.350 14:10:05 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.350 14:10:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:38.350 14:10:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.350 + '[' 2 -ne 2 ']' 00:05:38.350 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:38.350 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:38.350 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:38.350 +++ basename /dev/fd/62 00:05:38.350 ++ mktemp /tmp/62.XXX 00:05:38.350 + tmp_file_1=/tmp/62.5G1 00:05:38.350 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.350 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.350 + tmp_file_2=/tmp/spdk_tgt_config.json.78f 00:05:38.350 + ret=0 00:05:38.350 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.915 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.915 + diff -u /tmp/62.5G1 /tmp/spdk_tgt_config.json.78f 00:05:38.915 + echo 'INFO: JSON config files are the same' 00:05:38.915 INFO: JSON config files are the same 00:05:38.915 + rm /tmp/62.5G1 /tmp/spdk_tgt_config.json.78f 00:05:38.915 + exit 0 00:05:38.915 14:10:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:38.915 INFO: changing configuration and checking if this can be detected... 00:05:38.915 14:10:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
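The "configuration is the same" check that just passed reduces to normalizing two JSON configs and diffing them: json_diff.sh captures the live config over RPC, pushes both sides through config_filter.py -method sort to strip ordering differences, and runs diff -u. A condensed sketch of that flow, using the helper paths from this run (the temp-file handling is simplified relative to json_diff.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    tmp_live=$(mktemp /tmp/62.XXX)
    tmp_ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # sort keys/arrays on both sides so only real differences survive the diff
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$tmp_live"
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$tmp_ref"
    diff -u "$tmp_live" "$tmp_ref" && echo 'INFO: JSON config files are the same'
    rm "$tmp_live" "$tmp_ref"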
00:05:38.915 14:10:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.915 14:10:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.172 14:10:06 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:39.172 14:10:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.172 14:10:06 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.172 + '[' 2 -ne 2 ']' 00:05:39.172 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:39.172 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:39.172 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:39.172 +++ basename /dev/fd/62 00:05:39.172 ++ mktemp /tmp/62.XXX 00:05:39.172 + tmp_file_1=/tmp/62.tRt 00:05:39.172 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.172 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.172 + tmp_file_2=/tmp/spdk_tgt_config.json.sVO 00:05:39.172 + ret=0 00:05:39.172 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.430 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.430 + diff -u /tmp/62.tRt /tmp/spdk_tgt_config.json.sVO 00:05:39.430 + ret=1 00:05:39.430 + echo '=== Start of file: /tmp/62.tRt ===' 00:05:39.430 + cat /tmp/62.tRt 00:05:39.430 + echo '=== End of file: /tmp/62.tRt ===' 00:05:39.430 + echo '' 00:05:39.430 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sVO ===' 00:05:39.430 + cat /tmp/spdk_tgt_config.json.sVO 00:05:39.430 + echo '=== End of file: /tmp/spdk_tgt_config.json.sVO ===' 00:05:39.430 + echo '' 00:05:39.430 + rm /tmp/62.tRt /tmp/spdk_tgt_config.json.sVO 00:05:39.430 + exit 1 00:05:39.430 INFO: configuration change detected. 00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
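The deletion just issued is the setup for the negative test: with the marker bdev MallocBdevForConfigChangeCheck gone, the same normalized diff must now fail. A sketch of that expectation, reusing the rpc/filter variables from the previous sketch:

    # live config no longer contains MallocBdevForConfigChangeCheck
    if $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort |
            diff -u - <($filter -method sort \
                < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json) >/dev/null; then
        echo 'ERROR: configuration change went undetected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'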
00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:39.430 14:10:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.430 14:10:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@324 -- # [[ -n 59580 ]] 00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:39.430 14:10:06 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.430 14:10:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.430 14:10:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 14:10:07 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:39.430 14:10:07 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:39.430 14:10:07 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:39.430 14:10:07 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:39.430 14:10:07 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:39.430 14:10:07 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 14:10:07 json_config -- json_config/json_config.sh@330 -- # killprocess 59580 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@952 -- # '[' -z 59580 ']' 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@956 -- # kill -0 59580 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@957 -- # uname 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59580 00:05:39.430 killing process with pid 59580 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59580' 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@971 -- # kill 59580 00:05:39.430 14:10:07 json_config -- common/autotest_common.sh@976 -- # wait 59580 00:05:39.997 14:10:07 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.998 14:10:07 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:39.998 14:10:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.998 14:10:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.998 14:10:07 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:39.998 INFO: Success 00:05:39.998 14:10:07 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:39.998 ************************************ 00:05:39.998 END TEST json_config 00:05:39.998 
************************************ 00:05:39.998 00:05:39.998 real 0m8.334s 00:05:39.998 user 0m11.339s 00:05:39.998 sys 0m1.468s 00:05:39.998 14:10:07 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.998 14:10:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.998 14:10:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.998 14:10:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.998 14:10:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.998 14:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:40.256 ************************************ 00:05:40.256 START TEST json_config_extra_key 00:05:40.256 ************************************ 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.256 14:10:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.256 --rc genhtml_branch_coverage=1 00:05:40.256 --rc genhtml_function_coverage=1 00:05:40.256 --rc genhtml_legend=1 00:05:40.256 --rc geninfo_all_blocks=1 00:05:40.256 --rc geninfo_unexecuted_blocks=1 00:05:40.256 00:05:40.256 ' 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.256 --rc genhtml_branch_coverage=1 00:05:40.256 --rc genhtml_function_coverage=1 00:05:40.256 --rc genhtml_legend=1 00:05:40.256 --rc geninfo_all_blocks=1 00:05:40.256 --rc geninfo_unexecuted_blocks=1 00:05:40.256 00:05:40.256 ' 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.256 --rc genhtml_branch_coverage=1 00:05:40.256 --rc genhtml_function_coverage=1 00:05:40.256 --rc genhtml_legend=1 00:05:40.256 --rc geninfo_all_blocks=1 00:05:40.256 --rc geninfo_unexecuted_blocks=1 00:05:40.256 00:05:40.256 ' 00:05:40.256 14:10:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.256 --rc genhtml_branch_coverage=1 00:05:40.256 --rc genhtml_function_coverage=1 00:05:40.256 --rc genhtml_legend=1 00:05:40.256 --rc geninfo_all_blocks=1 00:05:40.256 --rc geninfo_unexecuted_blocks=1 00:05:40.256 00:05:40.256 ' 00:05:40.256 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.256 14:10:07 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.256 14:10:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.257 14:10:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.257 14:10:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.257 14:10:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.257 14:10:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.257 14:10:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.257 14:10:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.257 14:10:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.257 14:10:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:40.257 14:10:07 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.257 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.257 14:10:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.257 INFO: launching applications... 00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
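The harness state set up above (json_config/common.sh) keys everything off an app name, so the same helpers can drive a target and, in other test variants, an initiator. A minimal sketch of that associative-array pattern, with the values this run assigned to the target app:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    # every helper resolves its settings through the app name
    echo "RPC socket for $app: ${app_socket[$app]}"
    echo "extra args for $app: ${app_params[$app]}"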
00:05:40.257 14:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59765 00:05:40.257 Waiting for target to run... 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59765 /var/tmp/spdk_tgt.sock 00:05:40.257 14:10:07 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 59765 ']' 00:05:40.257 14:10:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.257 14:10:07 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.257 14:10:07 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.257 14:10:07 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.257 14:10:07 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.257 14:10:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.257 [2024-11-06 14:10:07.855439] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:40.257 [2024-11-06 14:10:07.855576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:05:40.832 [2024-11-06 14:10:08.179281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.832 [2024-11-06 14:10:08.272405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.397 14:10:08 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:41.397 14:10:08 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:41.397 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:41.397 INFO: shutting down applications... 00:05:41.397 14:10:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
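The shutdown that follows (common.sh@31-45, traced below across several retries) is a SIGINT followed by a bounded poll: kill -0 probes whether the pid is still alive, with a half-second sleep between probes and at most 30 attempts. A condensed sketch:

    # ask the target to exit, then grant up to 30 half-second grace periods
    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        kill -0 "${app_pid[$app]}" 2>/dev/null || break   # process is gone
        sleep 0.5
    done
    app_pid[$app]=
    echo 'SPDK target shutdown done'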
00:05:41.397 14:10:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59765 ]] 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59765 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59765 00:05:41.397 14:10:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.962 14:10:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.962 14:10:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.962 14:10:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59765 00:05:41.962 14:10:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.219 14:10:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.219 14:10:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.219 14:10:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59765 00:05:42.219 14:10:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.783 14:10:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.783 14:10:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.783 14:10:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59765 00:05:42.783 14:10:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.349 14:10:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.349 14:10:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.349 14:10:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59765 00:05:43.349 14:10:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.349 14:10:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:43.349 14:10:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.349 SPDK target shutdown done 00:05:43.349 14:10:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.349 Success 00:05:43.349 14:10:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:43.349 00:05:43.349 real 0m3.178s 00:05:43.349 user 0m2.875s 00:05:43.349 sys 0m0.408s 00:05:43.349 14:10:10 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.349 14:10:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.349 ************************************ 00:05:43.349 END TEST json_config_extra_key 00:05:43.349 ************************************ 00:05:43.349 14:10:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.349 14:10:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.349 14:10:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.349 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:05:43.349 
************************************ 00:05:43.349 START TEST alias_rpc 00:05:43.349 ************************************ 00:05:43.349 14:10:10 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.349 * Looking for test storage... 00:05:43.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:43.349 14:10:10 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.349 14:10:10 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.349 14:10:10 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.349 14:10:10 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.349 14:10:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.350 14:10:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.350 --rc genhtml_branch_coverage=1 00:05:43.350 --rc genhtml_function_coverage=1 00:05:43.350 --rc genhtml_legend=1 00:05:43.350 --rc geninfo_all_blocks=1 00:05:43.350 --rc geninfo_unexecuted_blocks=1 00:05:43.350 00:05:43.350 ' 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.350 --rc genhtml_branch_coverage=1 00:05:43.350 --rc genhtml_function_coverage=1 00:05:43.350 --rc genhtml_legend=1 00:05:43.350 --rc geninfo_all_blocks=1 00:05:43.350 --rc geninfo_unexecuted_blocks=1 00:05:43.350 00:05:43.350 ' 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.350 --rc genhtml_branch_coverage=1 00:05:43.350 --rc genhtml_function_coverage=1 00:05:43.350 --rc genhtml_legend=1 00:05:43.350 --rc geninfo_all_blocks=1 00:05:43.350 --rc geninfo_unexecuted_blocks=1 00:05:43.350 00:05:43.350 ' 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.350 --rc genhtml_branch_coverage=1 00:05:43.350 --rc genhtml_function_coverage=1 00:05:43.350 --rc genhtml_legend=1 00:05:43.350 --rc geninfo_all_blocks=1 00:05:43.350 --rc geninfo_unexecuted_blocks=1 00:05:43.350 00:05:43.350 ' 00:05:43.350 14:10:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.350 14:10:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59876 00:05:43.350 14:10:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59876 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 59876 ']' 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
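The cmp_versions trace above (scripts/common.sh@333-368, deciding that lcov 1.15 predates 2) is a component-wise version compare: split both strings on '.', '-' and ':', then walk the components numerically, padding the shorter version with zeros. A standalone sketch of the same idea (the function name is illustrative; the in-tree helper also validates each component via its decimal helper):

    cmp_versions_sketch() {   # usage: cmp_versions_sketch 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # every component matched
    }
    cmp_versions_sketch 1.15 '<' 2 && echo 'lcov predates 2.x'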
00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.350 14:10:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.350 14:10:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.608 [2024-11-06 14:10:11.062641] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:43.608 [2024-11-06 14:10:11.063208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59876 ] 00:05:43.608 [2024-11-06 14:10:11.220684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.865 [2024-11-06 14:10:11.301336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.429 14:10:11 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.429 14:10:11 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:44.429 14:10:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:44.686 14:10:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59876 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 59876 ']' 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 59876 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59876 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.686 killing process with pid 59876 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59876' 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@971 -- # kill 59876 00:05:44.686 14:10:12 alias_rpc -- common/autotest_common.sh@976 -- # wait 59876 00:05:46.057 00:05:46.057 real 0m2.521s 00:05:46.057 user 0m2.686s 00:05:46.057 sys 0m0.385s 00:05:46.057 ************************************ 00:05:46.057 END TEST alias_rpc 00:05:46.057 ************************************ 00:05:46.057 14:10:13 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.057 14:10:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.057 14:10:13 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:05:46.058 14:10:13 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.058 14:10:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.058 14:10:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.058 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:05:46.058 ************************************ 00:05:46.058 START TEST dpdk_mem_utility 00:05:46.058 ************************************ 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.058 * Looking for test storage... 
00:05:46.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.058 14:10:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.058 --rc genhtml_branch_coverage=1 00:05:46.058 --rc genhtml_function_coverage=1 00:05:46.058 --rc genhtml_legend=1 00:05:46.058 --rc geninfo_all_blocks=1 00:05:46.058 --rc geninfo_unexecuted_blocks=1 00:05:46.058 00:05:46.058 ' 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.058 --rc 
genhtml_branch_coverage=1 00:05:46.058 --rc genhtml_function_coverage=1 00:05:46.058 --rc genhtml_legend=1 00:05:46.058 --rc geninfo_all_blocks=1 00:05:46.058 --rc geninfo_unexecuted_blocks=1 00:05:46.058 00:05:46.058 ' 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.058 --rc genhtml_branch_coverage=1 00:05:46.058 --rc genhtml_function_coverage=1 00:05:46.058 --rc genhtml_legend=1 00:05:46.058 --rc geninfo_all_blocks=1 00:05:46.058 --rc geninfo_unexecuted_blocks=1 00:05:46.058 00:05:46.058 ' 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.058 --rc genhtml_branch_coverage=1 00:05:46.058 --rc genhtml_function_coverage=1 00:05:46.058 --rc genhtml_legend=1 00:05:46.058 --rc geninfo_all_blocks=1 00:05:46.058 --rc geninfo_unexecuted_blocks=1 00:05:46.058 00:05:46.058 ' 00:05:46.058 14:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.058 14:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59976 00:05:46.058 14:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59976 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 59976 ']' 00:05:46.058 14:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.058 14:10:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.058 [2024-11-06 14:10:13.635423] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
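What runs next is the heart of this test: an env_dpdk_get_mem_stats RPC makes the target write its DPDK allocator state to the dump file named in the reply, and dpdk_mem_info.py renders that dump as the heap/mempool/memzone summary reproduced below. A condensed sketch of the sequence (rpc.py falls back to the default /var/tmp/spdk.sock socket this target is listening on):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    $rpc env_dpdk_get_mem_stats   # reply names the dump, e.g. /tmp/spdk_mem_dump.txt
    $mem_script                   # summary: heaps, mempools, memzones
    $mem_script -m 0              # per-element detail for heap id 0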
00:05:46.058 [2024-11-06 14:10:13.635551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59976 ] 00:05:46.317 [2024-11-06 14:10:13.796008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.317 [2024-11-06 14:10:13.896236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.914 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.914 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:46.914 14:10:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.914 14:10:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.914 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.914 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.914 { 00:05:46.914 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.914 } 00:05:46.914 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.914 14:10:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.914 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:46.914 1 heaps totaling size 816.000000 MiB 00:05:46.914 size: 816.000000 MiB heap id: 0 00:05:46.914 end heaps---------- 00:05:46.914 9 mempools totaling size 595.772034 MiB 00:05:46.914 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.914 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.914 size: 92.545471 MiB name: bdev_io_59976 00:05:46.914 size: 50.003479 MiB name: msgpool_59976 00:05:46.914 size: 36.509338 MiB name: fsdev_io_59976 00:05:46.914 size: 21.763794 MiB name: PDU_Pool 00:05:46.914 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.914 size: 4.133484 MiB name: evtpool_59976 00:05:46.914 size: 0.026123 MiB name: Session_Pool 00:05:46.914 end mempools------- 00:05:46.914 6 memzones totaling size 4.142822 MiB 00:05:46.914 size: 1.000366 MiB name: RG_ring_0_59976 00:05:46.914 size: 1.000366 MiB name: RG_ring_1_59976 00:05:46.914 size: 1.000366 MiB name: RG_ring_4_59976 00:05:46.914 size: 1.000366 MiB name: RG_ring_5_59976 00:05:46.914 size: 0.125366 MiB name: RG_ring_2_59976 00:05:46.914 size: 0.015991 MiB name: RG_ring_3_59976 00:05:46.914 end memzones------- 00:05:47.174 14:10:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.174 heap id: 0 total size: 816.000000 MiB number of busy elements: 236 number of free elements: 18 00:05:47.174 list of free elements. 
size: 16.810791 MiB 00:05:47.174 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:47.174 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:47.174 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:47.174 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:47.174 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:47.174 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:47.174 element at address: 0x200000400000 with size: 0.994934 MiB 00:05:47.174 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:47.174 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:47.174 element at address: 0x200019500040 with size: 0.937012 MiB 00:05:47.174 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:47.174 element at address: 0x20001ac00000 with size: 0.568542 MiB 00:05:47.174 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:47.174 element at address: 0x200018e00000 with size: 0.488464 MiB 00:05:47.174 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:47.174 element at address: 0x200012c00000 with size: 0.444214 MiB 00:05:47.174 element at address: 0x200028000000 with size: 0.390930 MiB 00:05:47.174 element at address: 0x200000800000 with size: 0.353088 MiB 00:05:47.174 list of standard malloc elements. size: 199.268311 MiB 00:05:47.174 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:47.174 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:47.174 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:47.174 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:47.174 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:47.174 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:47.174 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:47.174 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:47.174 element at address: 0x20000a5ff180 with size: 0.000366 MiB 00:05:47.174 element at address: 0x20000a5ff040 with size: 0.000305 MiB 00:05:47.174 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:47.174 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 
00:05:47.174 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:47.174 element at 
address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000195efe40 with size: 0.000244 MiB 00:05:47.174 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:47.174 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac939c0 
with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:47.175 element at address: 0x200028064140 with size: 0.000244 MiB 00:05:47.175 element at address: 0x200028064240 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806af00 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c480 with size: 0.000244 MiB 
00:05:47.175 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:47.175 element at 
address: 0x20002806f680 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:47.175 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:47.175 list of memzone associated elements. size: 599.920898 MiB 00:05:47.175 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:47.175 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.175 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:47.175 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.175 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:47.175 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59976_0 00:05:47.175 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:47.175 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59976_0 00:05:47.175 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:47.175 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59976_0 00:05:47.175 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:47.175 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.175 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:47.175 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.175 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:47.175 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59976_0 00:05:47.176 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:47.176 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59976 00:05:47.176 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:47.176 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59976 00:05:47.176 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:47.176 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.176 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:47.176 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.176 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:47.176 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.176 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:47.176 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.176 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:47.176 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59976 00:05:47.176 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:47.176 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59976 00:05:47.176 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:47.176 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59976 00:05:47.176 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:47.176 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59976 00:05:47.176 element at address: 0x20000087f5c0 with size: 0.500549 
MiB 00:05:47.176 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59976 00:05:47.176 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:47.176 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59976 00:05:47.176 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:47.176 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.176 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:47.176 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.176 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:47.176 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.176 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:47.176 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59976 00:05:47.176 element at address: 0x20000085e880 with size: 0.125549 MiB 00:05:47.176 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59976 00:05:47.176 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:47.176 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.176 element at address: 0x200028064340 with size: 0.023804 MiB 00:05:47.176 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.176 element at address: 0x20000085a640 with size: 0.016174 MiB 00:05:47.176 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59976 00:05:47.176 element at address: 0x20002806a4c0 with size: 0.002502 MiB 00:05:47.176 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.176 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:47.176 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59976 00:05:47.176 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:47.176 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59976 00:05:47.176 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:47.176 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59976 00:05:47.176 element at address: 0x20002806b000 with size: 0.000366 MiB 00:05:47.176 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.176 14:10:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.176 14:10:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59976 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 59976 ']' 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 59976 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59976 00:05:47.176 killing process with pid 59976 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59976' 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 59976 00:05:47.176 14:10:14 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 59976 00:05:48.562 00:05:48.562 real 0m2.724s 00:05:48.562 user 0m2.747s 
00:05:48.562 sys 0m0.405s 00:05:48.562 ************************************ 00:05:48.562 END TEST dpdk_mem_utility 00:05:48.562 14:10:16 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.562 14:10:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.562 ************************************ 00:05:48.562 14:10:16 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.562 14:10:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.562 14:10:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.562 14:10:16 -- common/autotest_common.sh@10 -- # set +x 00:05:48.562 ************************************ 00:05:48.562 START TEST event 00:05:48.562 ************************************ 00:05:48.562 14:10:16 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.819 * Looking for test storage... 00:05:48.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.819 14:10:16 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.819 14:10:16 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.819 14:10:16 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.819 14:10:16 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.819 14:10:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.819 14:10:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.819 14:10:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.819 14:10:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.819 14:10:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.819 14:10:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.819 14:10:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.819 14:10:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.819 14:10:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.819 14:10:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.819 14:10:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.819 14:10:16 event -- scripts/common.sh@344 -- # case "$op" in 00:05:48.819 14:10:16 event -- scripts/common.sh@345 -- # : 1 00:05:48.819 14:10:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.819 14:10:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.820 14:10:16 event -- scripts/common.sh@365 -- # decimal 1 00:05:48.820 14:10:16 event -- scripts/common.sh@353 -- # local d=1 00:05:48.820 14:10:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.820 14:10:16 event -- scripts/common.sh@355 -- # echo 1 00:05:48.820 14:10:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.820 14:10:16 event -- scripts/common.sh@366 -- # decimal 2 00:05:48.820 14:10:16 event -- scripts/common.sh@353 -- # local d=2 00:05:48.820 14:10:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.820 14:10:16 event -- scripts/common.sh@355 -- # echo 2 00:05:48.820 14:10:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.820 14:10:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.820 14:10:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.820 14:10:16 event -- scripts/common.sh@368 -- # return 0 00:05:48.820 14:10:16 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.820 14:10:16 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.820 --rc genhtml_branch_coverage=1 00:05:48.820 --rc genhtml_function_coverage=1 00:05:48.820 --rc genhtml_legend=1 00:05:48.820 --rc geninfo_all_blocks=1 00:05:48.820 --rc geninfo_unexecuted_blocks=1 00:05:48.820 00:05:48.820 ' 00:05:48.820 14:10:16 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.820 --rc genhtml_branch_coverage=1 00:05:48.820 --rc genhtml_function_coverage=1 00:05:48.820 --rc genhtml_legend=1 00:05:48.820 --rc geninfo_all_blocks=1 00:05:48.820 --rc geninfo_unexecuted_blocks=1 00:05:48.820 00:05:48.820 ' 00:05:48.820 14:10:16 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.820 --rc genhtml_branch_coverage=1 00:05:48.820 --rc genhtml_function_coverage=1 00:05:48.820 --rc genhtml_legend=1 00:05:48.820 --rc geninfo_all_blocks=1 00:05:48.820 --rc geninfo_unexecuted_blocks=1 00:05:48.820 00:05:48.820 ' 00:05:48.820 14:10:16 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.820 --rc genhtml_branch_coverage=1 00:05:48.820 --rc genhtml_function_coverage=1 00:05:48.820 --rc genhtml_legend=1 00:05:48.820 --rc geninfo_all_blocks=1 00:05:48.820 --rc geninfo_unexecuted_blocks=1 00:05:48.820 00:05:48.820 ' 00:05:48.820 14:10:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.820 14:10:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.820 14:10:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.820 14:10:16 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:48.820 14:10:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.820 14:10:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.820 ************************************ 00:05:48.820 START TEST event_perf 00:05:48.820 ************************************ 00:05:48.820 14:10:16 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.820 Running I/O for 1 seconds...[2024-11-06 
14:10:16.341757] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:48.820 [2024-11-06 14:10:16.341868] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60091 ] 00:05:49.078 [2024-11-06 14:10:16.502496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.078 [2024-11-06 14:10:16.606899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.078 [2024-11-06 14:10:16.607733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.078 [2024-11-06 14:10:16.607968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.078 Running I/O for 1 seconds...[2024-11-06 14:10:16.608016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.451 00:05:50.451 lcore 0: 175293 00:05:50.451 lcore 1: 175295 00:05:50.451 lcore 2: 175296 00:05:50.451 lcore 3: 175293 00:05:50.451 done. 00:05:50.451 00:05:50.451 real 0m1.474s 00:05:50.451 user 0m4.267s 00:05:50.451 sys 0m0.084s 00:05:50.451 14:10:17 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.451 14:10:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.451 ************************************ 00:05:50.451 END TEST event_perf 00:05:50.451 ************************************ 00:05:50.451 14:10:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.451 14:10:17 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:50.451 14:10:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.451 14:10:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.451 ************************************ 00:05:50.451 START TEST event_reactor 00:05:50.451 ************************************ 00:05:50.451 14:10:17 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.452 [2024-11-06 14:10:17.854798] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
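
The per-lcore counts above are what event_perf reported after its one-second run: the -m 0xF mask it was started with selects lcores 0-3, one reactor per set bit. A minimal shell sketch of decoding such a DPDK-style core mask (illustrative only, not part of the test run):

  mask=0xF                       # same value passed via -m above
  for bit in $(seq 0 31); do     # lcore IDs count up from 0
    if (( (mask >> bit) & 1 )); then
      echo "lcore $bit selected"
    fi
  done

With mask=0xF this prints lcores 0 through 3, matching the four reactors started above.
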
00:05:50.452 [2024-11-06 14:10:17.854920] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:05:50.452 [2024-11-06 14:10:18.014458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.709 [2024-11-06 14:10:18.129614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.081 test_start 00:05:52.081 oneshot 00:05:52.081 tick 100 00:05:52.081 tick 100 00:05:52.081 tick 250 00:05:52.081 tick 100 00:05:52.081 tick 100 00:05:52.081 tick 100 00:05:52.081 tick 250 00:05:52.081 tick 500 00:05:52.081 tick 100 00:05:52.081 tick 100 00:05:52.081 tick 250 00:05:52.081 tick 100 00:05:52.081 tick 100 00:05:52.081 test_end 00:05:52.081 00:05:52.081 real 0m1.472s 00:05:52.081 user 0m1.303s 00:05:52.081 sys 0m0.060s 00:05:52.081 14:10:19 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.081 14:10:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 ************************************ 00:05:52.081 END TEST event_reactor 00:05:52.081 ************************************ 00:05:52.081 14:10:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.081 14:10:19 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:52.081 14:10:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.081 14:10:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 ************************************ 00:05:52.081 START TEST event_reactor_perf 00:05:52.081 ************************************ 00:05:52.081 14:10:19 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.081 [2024-11-06 14:10:19.368691] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:52.081 [2024-11-06 14:10:19.368826] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60167 ] 00:05:52.081 [2024-11-06 14:10:19.530330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.081 [2024-11-06 14:10:19.698483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.476 test_start 00:05:53.476 test_end 00:05:53.476 Performance: 309188 events per second 00:05:53.476 00:05:53.476 real 0m1.520s 00:05:53.476 user 0m1.335s 00:05:53.476 sys 0m0.076s 00:05:53.476 14:10:20 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.477 ************************************ 00:05:53.477 END TEST event_reactor_perf 00:05:53.477 ************************************ 00:05:53.477 14:10:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.477 14:10:20 event -- event/event.sh@49 -- # uname -s 00:05:53.477 14:10:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.477 14:10:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:53.477 14:10:20 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.477 14:10:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.477 14:10:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.477 ************************************ 00:05:53.477 START TEST event_scheduler 00:05:53.477 ************************************ 00:05:53.477 14:10:20 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:53.477 * Looking for test storage... 
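
Every test above runs inside the same harness: run_test from autotest_common.sh prints the starred START TEST / END TEST banners around a timed test body, which is where the real/user/sys triples come from. A minimal sketch of that shape, assumed for illustration rather than copied from SPDK:

  run_test_sketch() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                    # produces the real/user/sys lines seen above
    local rc=$?                  # exit status of the test body, not of time
    echo "END TEST $name"
    return $rc
  }
  # usage: run_test_sketch event_reactor_perf ./reactor_perf -t 1
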
00:05:53.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:53.477 14:10:20 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.477 14:10:20 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.477 14:10:20 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.477 14:10:21 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 14:10:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.477 14:10:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60236 00:05:53.477 14:10:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.477 14:10:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60236 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 60236 ']' 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:53.477 14:10:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.477 14:10:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.477 [2024-11-06 14:10:21.102614] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:53.477 [2024-11-06 14:10:21.102742] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60236 ] 00:05:53.735 [2024-11-06 14:10:21.261850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.735 [2024-11-06 14:10:21.363819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.735 [2024-11-06 14:10:21.364190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.735 [2024-11-06 14:10:21.364268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.735 [2024-11-06 14:10:21.364872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.669 14:10:21 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.669 14:10:21 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:54.669 14:10:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.669 14:10:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.669 14:10:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.669 POWER: Cannot set governor of lcore 0 to performance 00:05:54.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.669 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:54.669 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:54.669 POWER: Unable to set Power Management Environment for lcore 0 00:05:54.669 [2024-11-06 14:10:21.990287] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:54.669 [2024-11-06 14:10:21.990399] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:54.669 [2024-11-06 14:10:21.990410] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.669 [2024-11-06 14:10:21.990427] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.669 [2024-11-06 14:10:21.990517] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.669 [2024-11-06 14:10:21.990526] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.669 14:10:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.669 14:10:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.669 14:10:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.669 14:10:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.669 [2024-11-06 14:10:22.214897] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:54.669 14:10:22 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.669 14:10:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.669 14:10:22 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.669 14:10:22 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.669 14:10:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.669 ************************************ 00:05:54.669 START TEST scheduler_create_thread 00:05:54.669 ************************************ 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 2 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 3 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 4 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 5 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 6 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 7 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 8 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 9 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 10 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.928 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.186 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.186 00:05:55.186 real 0m0.592s 00:05:55.186 user 0m0.013s 00:05:55.186 sys 0m0.003s 00:05:55.186 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.186 14:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.186 ************************************ 00:05:55.186 END TEST scheduler_create_thread 00:05:55.186 ************************************ 00:05:55.444 14:10:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:55.444 14:10:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60236 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 60236 ']' 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 60236 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60236 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:55.444 killing process with pid 60236 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60236' 00:05:55.444 14:10:22 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 60236 00:05:55.444 14:10:22 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 60236 00:05:55.702 [2024-11-06 14:10:23.296029] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:56.635 00:05:56.635 real 0m3.135s 00:05:56.635 user 0m6.125s 00:05:56.635 sys 0m0.368s 00:05:56.635 14:10:24 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.635 14:10:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.635 ************************************ 00:05:56.635 END TEST event_scheduler 00:05:56.635 ************************************ 00:05:56.635 14:10:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:56.635 14:10:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:56.635 14:10:24 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.635 14:10:24 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.635 14:10:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.635 ************************************ 00:05:56.635 START TEST app_repeat 00:05:56.635 ************************************ 00:05:56.635 14:10:24 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:56.635 14:10:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60334 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.636 Process app_repeat pid: 60334 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60334' 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.636 spdk_app_start Round 0 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60334 /var/tmp/spdk-nbd.sock 00:05:56.636 14:10:24 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60334 ']' 00:05:56.636 14:10:24 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.636 14:10:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:56.636 14:10:24 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.636 14:10:24 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.636 14:10:24 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.636 14:10:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.636 [2024-11-06 14:10:24.117685] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
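
The scheduler test above drives everything through rpc_cmd with the test-only scheduler_plugin: scheduler_thread_create takes a thread name (-n), an optional cpumask (-m) and an active percentage (-a) and prints the new thread id, which scheduler.sh captures (thread_id=11, thread_id=12 above) and passes to scheduler_thread_set_active and scheduler_thread_delete. A condensed sketch of that call sequence; the RPC names and flags are taken from the xtrace, the id-capture shape is inferred:

  # four fully busy threads pinned to lcores 0-3, as in scheduler.sh
  for i in 0 1 2 3; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m "$(printf '0x%x' $((1 << i)))" -a 100
  done
  # create a mostly idle thread, retune it to 50% active, then delete it
  id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$id"
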
00:05:56.636 [2024-11-06 14:10:24.117779] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60334 ] 00:05:56.636 [2024-11-06 14:10:24.267488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.894 [2024-11-06 14:10:24.351766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.894 [2024-11-06 14:10:24.351777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.466 14:10:24 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.466 14:10:24 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:57.466 14:10:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.724 Malloc0 00:05:57.724 14:10:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.982 Malloc1 00:05:57.982 14:10:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.982 14:10:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.240 /dev/nbd0 00:05:58.240 14:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.240 14:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:58.240 14:10:25 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.240 1+0 records in 00:05:58.240 1+0 records out 00:05:58.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269767 s, 15.2 MB/s 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:58.240 14:10:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:58.240 14:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.240 14:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.241 14:10:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.499 /dev/nbd1 00:05:58.499 14:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.499 14:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.499 1+0 records in 00:05:58.499 1+0 records out 00:05:58.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229435 s, 17.9 MB/s 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:58.499 14:10:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:58.499 14:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.499 14:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.499 14:10:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.499 14:10:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
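
The repeated "1+0 records in / 1+0 records out" pairs above are waitfornbd verifying each device: once nbd0/nbd1 shows up in /proc/partitions, one 4 KiB O_DIRECT block is read into a scratch file and its size checked, confirming the NBD export actually serves data. A compressed sketch of the same probe; the grep/dd/stat steps mirror the xtrace, the retry delay is an assumption:

  waitfornbd_sketch() {
    local name=$1 tmp=/tmp/nbdtest          # the run above uses test/event/nbdtest
    for i in $(seq 1 20); do                # wait for the kernel to register it
      grep -q -w "$name" /proc/partitions && break
      sleep 0.1                             # assumed; not visible in the xtrace
    done
    dd if="/dev/$name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$tmp")" -ne 0 ] || return 1   # same '!=' 0 size check as above
    rm -f "$tmp"
  }
  # usage: waitfornbd_sketch nbd0
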
00:05:58.499 14:10:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.757 { 00:05:58.757 "bdev_name": "Malloc0", 00:05:58.757 "nbd_device": "/dev/nbd0" 00:05:58.757 }, 00:05:58.757 { 00:05:58.757 "bdev_name": "Malloc1", 00:05:58.757 "nbd_device": "/dev/nbd1" 00:05:58.757 } 00:05:58.757 ]' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.757 { 00:05:58.757 "bdev_name": "Malloc0", 00:05:58.757 "nbd_device": "/dev/nbd0" 00:05:58.757 }, 00:05:58.757 { 00:05:58.757 "bdev_name": "Malloc1", 00:05:58.757 "nbd_device": "/dev/nbd1" 00:05:58.757 } 00:05:58.757 ]' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.757 /dev/nbd1' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.757 /dev/nbd1' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.757 256+0 records in 00:05:58.757 256+0 records out 00:05:58.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471384 s, 222 MB/s 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.757 256+0 records in 00:05:58.757 256+0 records out 00:05:58.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153323 s, 68.4 MB/s 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.757 256+0 records in 00:05:58.757 256+0 records out 00:05:58.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197439 s, 53.1 MB/s 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.757 14:10:26 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.757 14:10:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.015 14:10:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.272 14:10:26 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.272 14:10:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.530 14:10:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.530 14:10:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.787 14:10:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.722 [2024-11-06 14:10:28.059348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.722 [2024-11-06 14:10:28.169277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.722 [2024-11-06 14:10:28.169539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.722 [2024-11-06 14:10:28.298039] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.722 [2024-11-06 14:10:28.298090] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.755 14:10:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.755 spdk_app_start Round 1 00:06:02.755 14:10:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:02.755 14:10:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60334 /var/tmp/spdk-nbd.sock 00:06:02.755 14:10:30 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60334 ']' 00:06:02.755 14:10:30 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.755 14:10:30 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:02.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.755 14:10:30 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
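Before round 1 repeats it, the nbd_dd_data_verify pass traced in round 0 is worth distilling: 1 MiB of random data is written through each NBD device with O_DIRECT, then compared back against the source file. A sketch using only commands and paths that appear in the trace:

    # write + verify phases of nbd_dd_data_verify (sketch, per the round-0 trace)
    TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct   # write through the NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$TMP" "$nbd"                              # byte-compare what the bdev stored
    done
    rm "$TMP"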
00:06:02.755 14:10:30 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:02.755 14:10:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.012 14:10:30 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:03.012 14:10:30 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:03.012 14:10:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.269 Malloc0 00:06:03.269 14:10:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.527 Malloc1 00:06:03.527 14:10:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.527 14:10:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.785 /dev/nbd0 00:06:03.786 14:10:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.786 14:10:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.786 1+0 records in 00:06:03.786 1+0 records out 
00:06:03.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180497 s, 22.7 MB/s 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.786 14:10:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.786 14:10:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.786 14:10:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.786 14:10:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.043 /dev/nbd1 00:06:04.043 14:10:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.043 14:10:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.043 1+0 records in 00:06:04.043 1+0 records out 00:06:04.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221055 s, 18.5 MB/s 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:04.043 14:10:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:04.043 14:10:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.043 14:10:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.043 14:10:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.043 14:10:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.043 14:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.301 { 00:06:04.301 "bdev_name": "Malloc0", 00:06:04.301 "nbd_device": "/dev/nbd0" 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "bdev_name": "Malloc1", 00:06:04.301 "nbd_device": "/dev/nbd1" 00:06:04.301 } 
00:06:04.301 ]' 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.301 { 00:06:04.301 "bdev_name": "Malloc0", 00:06:04.301 "nbd_device": "/dev/nbd0" 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "bdev_name": "Malloc1", 00:06:04.301 "nbd_device": "/dev/nbd1" 00:06:04.301 } 00:06:04.301 ]' 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.301 /dev/nbd1' 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.301 /dev/nbd1' 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.301 256+0 records in 00:06:04.301 256+0 records out 00:06:04.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118683 s, 88.4 MB/s 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.301 256+0 records in 00:06:04.301 256+0 records out 00:06:04.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156226 s, 67.1 MB/s 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.301 256+0 records in 00:06:04.301 256+0 records out 00:06:04.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174072 s, 60.2 MB/s 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.301 14:10:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.302 14:10:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.302 14:10:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.559 14:10:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.851 14:10:32 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.109 14:10:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.109 14:10:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.366 14:10:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.931 [2024-11-06 14:10:33.344028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.931 [2024-11-06 14:10:33.425676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.931 [2024-11-06 14:10:33.425922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.931 [2024-11-06 14:10:33.528566] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.931 [2024-11-06 14:10:33.528627] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.474 spdk_app_start Round 2 00:06:08.474 14:10:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.474 14:10:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:08.474 14:10:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60334 /var/tmp/spdk-nbd.sock 00:06:08.474 14:10:35 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60334 ']' 00:06:08.474 14:10:35 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.474 14:10:35 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.474 14:10:35 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
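Teardown, traced at the end of rounds 0 and 1 above, mirrors the setup; a rough sketch (the retry loop is simplified relative to the 20-attempt waitfornbd_exit helper in the trace):

    # teardown phase of one app_repeat round (sketch)
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done   # wait for the kernel to drop it
    $RPC spdk_kill_instance SIGTERM    # end this app iteration; the harness restarts it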
00:06:08.474 14:10:35 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.474 14:10:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.474 14:10:36 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.474 14:10:36 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:08.474 14:10:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.732 Malloc0 00:06:08.732 14:10:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.989 Malloc1 00:06:08.989 14:10:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.989 14:10:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.989 14:10:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.990 14:10:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.248 /dev/nbd0 00:06:09.248 14:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.248 14:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.248 1+0 records in 00:06:09.248 1+0 records out 
00:06:09.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023414 s, 17.5 MB/s 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.248 14:10:36 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:09.248 14:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.248 14:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.248 14:10:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.506 /dev/nbd1 00:06:09.506 14:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.506 14:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.506 1+0 records in 00:06:09.506 1+0 records out 00:06:09.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267404 s, 15.3 MB/s 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.506 14:10:36 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:09.506 14:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.506 14:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.506 14:10:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.506 14:10:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.506 14:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.765 { 00:06:09.765 "bdev_name": "Malloc0", 00:06:09.765 "nbd_device": "/dev/nbd0" 00:06:09.765 }, 00:06:09.765 { 00:06:09.765 "bdev_name": "Malloc1", 00:06:09.765 "nbd_device": "/dev/nbd1" 00:06:09.765 } 
00:06:09.765 ]' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.765 { 00:06:09.765 "bdev_name": "Malloc0", 00:06:09.765 "nbd_device": "/dev/nbd0" 00:06:09.765 }, 00:06:09.765 { 00:06:09.765 "bdev_name": "Malloc1", 00:06:09.765 "nbd_device": "/dev/nbd1" 00:06:09.765 } 00:06:09.765 ]' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.765 /dev/nbd1' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.765 /dev/nbd1' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.765 256+0 records in 00:06:09.765 256+0 records out 00:06:09.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00603813 s, 174 MB/s 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.765 256+0 records in 00:06:09.765 256+0 records out 00:06:09.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200187 s, 52.4 MB/s 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.765 256+0 records in 00:06:09.765 256+0 records out 00:06:09.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222553 s, 47.1 MB/s 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.765 14:10:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.765 14:10:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.025 14:10:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.284 14:10:37 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.284 14:10:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.284 14:10:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.541 14:10:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.473 [2024-11-06 14:10:38.880017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.473 [2024-11-06 14:10:38.977040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.473 [2024-11-06 14:10:38.977259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.473 [2024-11-06 14:10:39.106018] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.474 [2024-11-06 14:10:39.106099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.002 14:10:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60334 /var/tmp/spdk-nbd.sock 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60334 ']' 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
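Each round also brackets the I/O with nbd_get_count, which the trace shows is nothing more than nbd_get_disks piped through jq and grep; a sketch:

    # nbd_get_count (sketch): 2 while Malloc0/Malloc1 are exported, 0 after teardown
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true    # grep -c exits non-zero on an empty list, hence the true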
00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:14.002 14:10:41 event.app_repeat -- event/event.sh@39 -- # killprocess 60334 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 60334 ']' 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 60334 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60334 00:06:14.002 killing process with pid 60334 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60334' 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@971 -- # kill 60334 00:06:14.002 14:10:41 event.app_repeat -- common/autotest_common.sh@976 -- # wait 60334 00:06:14.568 spdk_app_start is called in Round 0. 00:06:14.568 Shutdown signal received, stop current app iteration 00:06:14.568 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:06:14.568 spdk_app_start is called in Round 1. 00:06:14.568 Shutdown signal received, stop current app iteration 00:06:14.568 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:06:14.568 spdk_app_start is called in Round 2. 00:06:14.568 Shutdown signal received, stop current app iteration 00:06:14.568 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:06:14.568 spdk_app_start is called in Round 3. 00:06:14.568 Shutdown signal received, stop current app iteration 00:06:14.568 14:10:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:14.568 14:10:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:14.568 00:06:14.568 real 0m17.844s 00:06:14.568 user 0m38.956s 00:06:14.568 sys 0m2.215s 00:06:14.568 ************************************ 00:06:14.568 END TEST app_repeat 00:06:14.568 ************************************ 00:06:14.568 14:10:41 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.568 14:10:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 14:10:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:14.568 14:10:41 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.568 14:10:41 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.568 14:10:41 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.568 14:10:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 ************************************ 00:06:14.568 START TEST cpu_locks 00:06:14.568 ************************************ 00:06:14.568 14:10:41 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.568 * Looking for test storage... 
00:06:14.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.568 14:10:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:14.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.568 --rc genhtml_branch_coverage=1 00:06:14.568 --rc genhtml_function_coverage=1 00:06:14.568 --rc genhtml_legend=1 00:06:14.568 --rc geninfo_all_blocks=1 00:06:14.568 --rc geninfo_unexecuted_blocks=1 00:06:14.568 00:06:14.568 ' 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:14.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.568 --rc genhtml_branch_coverage=1 00:06:14.568 --rc genhtml_function_coverage=1 
00:06:14.568 --rc genhtml_legend=1 00:06:14.568 --rc geninfo_all_blocks=1 00:06:14.568 --rc geninfo_unexecuted_blocks=1 00:06:14.568 00:06:14.568 ' 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:14.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.568 --rc genhtml_branch_coverage=1 00:06:14.568 --rc genhtml_function_coverage=1 00:06:14.568 --rc genhtml_legend=1 00:06:14.568 --rc geninfo_all_blocks=1 00:06:14.568 --rc geninfo_unexecuted_blocks=1 00:06:14.568 00:06:14.568 ' 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:14.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.568 --rc genhtml_branch_coverage=1 00:06:14.568 --rc genhtml_function_coverage=1 00:06:14.568 --rc genhtml_legend=1 00:06:14.568 --rc geninfo_all_blocks=1 00:06:14.568 --rc geninfo_unexecuted_blocks=1 00:06:14.568 00:06:14.568 ' 00:06:14.568 14:10:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:14.568 14:10:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:14.568 14:10:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:14.568 14:10:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.568 14:10:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 ************************************ 00:06:14.568 START TEST default_locks 00:06:14.568 ************************************ 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60963 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60963 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60963 ']' 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.568 14:10:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 [2024-11-06 14:10:42.197981] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:14.568 [2024-11-06 14:10:42.198109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60963 ] 00:06:14.828 [2024-11-06 14:10:42.356397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.828 [2024-11-06 14:10:42.455923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60963 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60963 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60963 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 60963 ']' 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 60963 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60963 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.760 killing process with pid 60963 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60963' 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 60963 00:06:15.760 14:10:43 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 60963 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60963 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60963 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60963 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60963 ']' 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.658 14:10:44 
event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.658 ERROR: process (pid: 60963) is no longer running 00:06:17.658 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60963) - No such process 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.658 00:06:17.658 real 0m2.651s 00:06:17.658 user 0m2.668s 00:06:17.658 sys 0m0.430s 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.658 14:10:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.658 ************************************ 00:06:17.658 END TEST default_locks 00:06:17.658 ************************************ 00:06:17.658 14:10:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:17.658 14:10:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.658 14:10:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.658 14:10:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.658 ************************************ 00:06:17.658 START TEST default_locks_via_rpc 00:06:17.658 ************************************ 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61039 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61039 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61039 ']' 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
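For reference, the default_locks run above exercises the suite's locks_exist helper: its xtrace shows lslocks -p 60963 piped into grep -q spdk_cpu_lock, i.e. the test passes only while the freshly started spdk_tgt holds a CPU-core lock file. A minimal standalone sketch of the same check, assuming only that util-linux lslocks is installed:

    # locks_exist: succeed if the given PID holds any spdk_cpu_lock file.
    locks_exist() {
        local pid=$1
        # lslocks lists the locks held by a process; SPDK core locks live at
        # /var/tmp/spdk_cpu_lock_NNN, so a quiet grep on the name is enough.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 60963   # exits 0 while the target from this run is alive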
00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.658 14:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.658 [2024-11-06 14:10:44.879884] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:17.659 [2024-11-06 14:10:44.880022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:06:17.659 [2024-11-06 14:10:45.035634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.659 [2024-11-06 14:10:45.134624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61039 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.225 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61039 00:06:18.482 14:10:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61039 00:06:18.482 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 61039 ']' 00:06:18.482 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 61039 00:06:18.482 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:18.482 14:10:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.482 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61039 00:06:18.483 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.483 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:18.483 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61039' 00:06:18.483 killing process with pid 61039 00:06:18.483 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 61039 00:06:18.483 14:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 61039 00:06:19.857 00:06:19.857 real 0m2.553s 00:06:19.857 user 0m2.566s 00:06:19.857 sys 0m0.446s 00:06:19.857 14:10:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.857 14:10:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.857 ************************************ 00:06:19.857 END TEST default_locks_via_rpc 00:06:19.857 ************************************ 00:06:19.857 14:10:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.857 14:10:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.857 14:10:47 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.857 14:10:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.857 ************************************ 00:06:19.857 START TEST non_locking_app_on_locked_coremask 00:06:19.857 ************************************ 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61109 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61109 /var/tmp/spdk.sock 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61109 ']' 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.857 14:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.857 [2024-11-06 14:10:47.477355] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
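For reference, default_locks_via_rpc (which just finished above) toggles core locking at runtime instead of at launch: the trace shows rpc_cmd invoking framework_disable_cpumask_locks, the no_locks helper confirming that no /var/tmp/spdk_cpu_lock_* files remain, and framework_enable_cpumask_locks re-claiming the core. A hedged sketch of driving the same sequence by hand; the scripts/rpc.py entry point and the default socket are assumptions, not shown in this log:

    # Release the core lock over JSON-RPC (default socket /var/tmp/spdk.sock).
    ./scripts/rpc.py framework_disable_cpumask_locks
    # No lock files should survive the release (the glob fails when none exist).
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo 'unexpected leftover locks'
    # Re-claim the core; lslocks on the target PID would match again now.
    ./scripts/rpc.py framework_enable_cpumask_locks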
00:06:19.857 [2024-11-06 14:10:47.477501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61109 ] 00:06:20.114 [2024-11-06 14:10:47.634317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.114 [2024-11-06 14:10:47.733537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61137 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61137 /var/tmp/spdk2.sock 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61137 ']' 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.701 14:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.959 [2024-11-06 14:10:48.406303] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:20.959 [2024-11-06 14:10:48.406753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:06:20.959 [2024-11-06 14:10:48.576984] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.959 [2024-11-06 14:10:48.577038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.218 [2024-11-06 14:10:48.819613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.623 14:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.623 14:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:22.623 14:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61109 00:06:22.623 14:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61109 00:06:22.623 14:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61109 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61109 ']' 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61109 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61109 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.881 killing process with pid 61109 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61109' 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61109 00:06:22.881 14:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61109 00:06:25.407 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61137 00:06:25.407 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61137 ']' 00:06:25.407 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61137 00:06:25.407 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:25.407 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:25.407 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61137 00:06:25.664 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:25.664 killing process with pid 61137 00:06:25.664 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:25.664 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61137' 00:06:25.664 14:10:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61137 00:06:25.664 14:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61137 00:06:27.071 00:06:27.072 real 0m6.855s 00:06:27.072 user 0m7.082s 00:06:27.072 sys 0m0.905s 00:06:27.072 14:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.072 14:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.072 ************************************ 00:06:27.072 END TEST non_locking_app_on_locked_coremask 00:06:27.072 ************************************ 00:06:27.072 14:10:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:27.072 14:10:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.072 14:10:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.072 14:10:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.072 ************************************ 00:06:27.072 START TEST locking_app_on_unlocked_coremask 00:06:27.072 ************************************ 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61251 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61251 /var/tmp/spdk.sock 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61251 ']' 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:27.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.072 14:10:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.072 [2024-11-06 14:10:54.368676] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:27.072 [2024-11-06 14:10:54.368799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:06:27.072 [2024-11-06 14:10:54.524815] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
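For reference, non_locking_app_on_locked_coremask (ending above) shows the intended coexistence path: per the xtrace, the first instance runs spdk_tgt -m 0x1 and locks core 0, while the second adds --disable-cpumask-locks and its own RPC socket, so both share the core without a claim conflict. Condensed from the traced command lines (paths shortened to repo-relative form):

    # First target claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &

    # Second target runs on the same core but skips the lock entirely; it
    # needs a separate RPC socket so the two daemons do not collide.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &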
00:06:27.072 [2024-11-06 14:10:54.524858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.072 [2024-11-06 14:10:54.608578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61279 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61279 /var/tmp/spdk2.sock 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61279 ']' 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.636 14:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.893 [2024-11-06 14:10:55.287963] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:27.893 [2024-11-06 14:10:55.288084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61279 ] 00:06:27.893 [2024-11-06 14:10:55.449067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.151 [2024-11-06 14:10:55.621536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.084 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.084 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:29.084 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61279 00:06:29.084 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61279 00:06:29.084 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.649 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61251 00:06:29.649 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61251 ']' 00:06:29.649 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 61251 00:06:29.649 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:29.649 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:29.650 14:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61251 00:06:29.650 14:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:29.650 14:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:29.650 killing process with pid 61251 00:06:29.650 14:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61251' 00:06:29.650 14:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 61251 00:06:29.650 14:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 61251 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61279 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61279 ']' 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 61279 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61279 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:32.173 killing process with pid 61279 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61279' 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 61279 00:06:32.173 14:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 61279 00:06:33.105 00:06:33.105 real 0m6.435s 00:06:33.105 user 0m6.717s 00:06:33.105 sys 0m0.872s 00:06:33.105 14:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.105 14:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.105 ************************************ 00:06:33.105 END TEST locking_app_on_unlocked_coremask 00:06:33.105 ************************************ 00:06:33.366 14:11:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.366 14:11:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.366 14:11:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.366 14:11:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.366 ************************************ 00:06:33.366 START TEST locking_app_on_locked_coremask 00:06:33.366 ************************************ 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61382 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61382 /var/tmp/spdk.sock 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61382 ']' 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.366 14:11:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.366 [2024-11-06 14:11:00.865921] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
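For reference, every teardown in this log goes through the same killprocess helper, whose xtrace repeats above: probe the PID, read its comm name (reactor_0 here), refuse to signal a sudo wrapper, then kill and wait. A simplified re-creation, inferred from the traced commands only; the real helper has extra platform branches that this sketch drops:

    # killprocess: terminate a test target by PID and reap it.
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
        [[ $process_name == sudo ]] && return 1          # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; ignore signal status
    }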
00:06:33.366 [2024-11-06 14:11:00.866062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:06:33.623 [2024-11-06 14:11:01.021983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.623 [2024-11-06 14:11:01.107893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61410 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61410 /var/tmp/spdk2.sock 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61410 /var/tmp/spdk2.sock 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61410 /var/tmp/spdk2.sock 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61410 ']' 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.199 14:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.199 [2024-11-06 14:11:01.748262] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:34.199 [2024-11-06 14:11:01.748390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61410 ] 00:06:34.456 [2024-11-06 14:11:01.912002] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61382 has claimed it. 00:06:34.456 [2024-11-06 14:11:01.912056] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.022 ERROR: process (pid: 61410) is no longer running 00:06:35.022 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61410) - No such process 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61382 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61382 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61382 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61382 ']' 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61382 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61382 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:35.022 killing process with pid 61382 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61382' 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61382 00:06:35.022 14:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61382 00:06:36.395 00:06:36.395 real 0m3.027s 00:06:36.395 user 0m3.200s 00:06:36.395 sys 0m0.566s 00:06:36.395 14:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.395 ************************************ 00:06:36.395 END 
TEST locking_app_on_locked_coremask 00:06:36.395 ************************************ 00:06:36.395 14:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.395 14:11:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.395 14:11:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.395 14:11:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.395 14:11:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.395 ************************************ 00:06:36.395 START TEST locking_overlapped_coremask 00:06:36.396 ************************************ 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61469 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61469 /var/tmp/spdk.sock 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61469 ']' 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:36.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.396 14:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.396 [2024-11-06 14:11:03.958546] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
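For reference, locking_app_on_locked_coremask (closed out above) is a negative test: the second locking target must fail to start, so the suite wraps waitforlisten in the NOT helper traced earlier (local es=0, run the command, then (( !es == 0 ))). A minimal sketch of that inversion; the real helper also whitelists specific exit codes through valid_exec_arg, which is elided here:

    # NOT: run a command and succeed only when the command fails.
    NOT() {
        local es=0
        "$@" || es=$?    # capture the status without tripping set -e
        (( !es == 0 ))   # arithmetic truth only when es was non-zero
    }

    NOT false   # exits 0
    NOT true    # exits 1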
00:06:36.396 [2024-11-06 14:11:03.958678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61469 ] 00:06:36.654 [2024-11-06 14:11:04.115503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.654 [2024-11-06 14:11:04.204734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.654 [2024-11-06 14:11:04.204982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.654 [2024-11-06 14:11:04.205008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.220 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.220 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:37.220 14:11:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61499 00:06:37.220 14:11:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61499 /var/tmp/spdk2.sock 00:06:37.220 14:11:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.220 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:37.220 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61499 /var/tmp/spdk2.sock 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:37.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61499 /var/tmp/spdk2.sock 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61499 ']' 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.221 14:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.478 [2024-11-06 14:11:04.880408] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:37.478 [2024-11-06 14:11:04.880528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61499 ] 00:06:37.478 [2024-11-06 14:11:05.052037] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61469 has claimed it. 00:06:37.478 [2024-11-06 14:11:05.052097] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.043 ERROR: process (pid: 61499) is no longer running 00:06:38.044 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61499) - No such process 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61469 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 61469 ']' 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 61469 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61469 00:06:38.044 killing process with pid 61469 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61469' 00:06:38.044 14:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 61469 00:06:38.044 14:11:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 61469 00:06:39.440 ************************************ 00:06:39.440 END TEST locking_overlapped_coremask 00:06:39.440 ************************************ 00:06:39.440 00:06:39.440 real 0m2.944s 00:06:39.440 user 0m8.027s 00:06:39.440 sys 0m0.462s 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.440 14:11:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.440 14:11:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.440 14:11:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.440 14:11:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.440 ************************************ 00:06:39.440 START TEST locking_overlapped_coremask_via_rpc 00:06:39.440 ************************************ 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61558 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61558 /var/tmp/spdk.sock 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61558 ']' 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.440 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.441 14:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.441 [2024-11-06 14:11:06.997063] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:39.441 [2024-11-06 14:11:06.997460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61558 ] 00:06:39.700 [2024-11-06 14:11:07.157184] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
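For reference, locking_overlapped_coremask (finished above) starts the survivor on mask 0x7 (cores 0-2), proves that a challenger on 0x1c dies on the shared core 2, and then runs check_remaining_locks to assert the survivor still holds exactly its three lock files. The comparison is visible in the trace as two bash arrays and a glob; reassembled:

    # check_remaining_locks: the 0x7 target should hold exactly cores 000-002.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what should exist
        [[ ${locks[*]} == "${locks_expected[*]}" ]]               # compare joined lists
    }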
00:06:39.700 [2024-11-06 14:11:07.157250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.700 [2024-11-06 14:11:07.265673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.700 [2024-11-06 14:11:07.266444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.700 [2024-11-06 14:11:07.266552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61588 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61588 /var/tmp/spdk2.sock 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61588 ']' 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.272 14:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.530 [2024-11-06 14:11:08.002078] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:40.530 [2024-11-06 14:11:08.002496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61588 ] 00:06:40.788 [2024-11-06 14:11:08.184320] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.788 [2024-11-06 14:11:08.184396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.788 [2024-11-06 14:11:08.404268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.788 [2024-11-06 14:11:08.404778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.788 [2024-11-06 14:11:08.404794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.161 [2024-11-06 14:11:09.618132] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61558 has claimed it. 
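For reference, the claim_cpu_cores error just above is the mechanism every test in this section probes: app.c maps each reactor core to a /var/tmp/spdk_cpu_lock_NNN file and takes an exclusive, non-blocking lock on it, so a second claimant fails immediately. A hedged illustration of the idea using flock(1) from plain bash; SPDK's actual C implementation is not shown in this log, and the exact locking primitive it uses is an assumption here:

    core=2
    lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")

    # Hold the lock file on fd 9 and try an exclusive, non-blocking lock;
    # failure means some other process has already claimed the core.
    exec 9>"$lockfile"
    if ! flock -xn 9; then
        echo "Cannot create lock on core $core: already claimed" >&2
        exit 1
    fi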
00:06:42.161 2024/11/06 14:11:09 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:42.161 request: 00:06:42.161 { 00:06:42.161 "method": "framework_enable_cpumask_locks", 00:06:42.161 "params": {} 00:06:42.161 } 00:06:42.161 Got JSON-RPC error response 00:06:42.161 GoRPCClient: error on JSON-RPC call 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61558 /var/tmp/spdk.sock 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61558 ']' 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.161 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61588 /var/tmp/spdk2.sock 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61588 ']' 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
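For reference, the exchange above is the via-rpc flavor of the same conflict: both targets boot with --disable-cpumask-locks, the first claims cores 0-2 through framework_enable_cpumask_locks, and the second's attempt on /var/tmp/spdk2.sock comes back as JSON-RPC Internal error -32603 ("Failed to claim CPU core: 2"). Reproducing the failing call by hand, assuming the scripts/rpc.py client (not shown in this log) and the socket path from this run:

    # Expected to fail while the first target holds the overlapping locks:
    # the server answers with error code -32603, mirrored by GoRPCClient above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks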
00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.419 14:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.677 ************************************ 00:06:42.677 END TEST locking_overlapped_coremask_via_rpc 00:06:42.677 ************************************ 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.677 00:06:42.677 real 0m3.182s 00:06:42.677 user 0m1.118s 00:06:42.677 sys 0m0.167s 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.677 14:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.677 14:11:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.677 14:11:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61558 ]] 00:06:42.677 14:11:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61558 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61558 ']' 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61558 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61558 00:06:42.677 killing process with pid 61558 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61558' 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61558 00:06:42.677 14:11:10 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61558 00:06:44.071 14:11:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61588 ]] 00:06:44.071 14:11:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61588 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61588 ']' 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61588 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:44.071 
14:11:11 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61588 00:06:44.071 killing process with pid 61588 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61588' 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61588 00:06:44.071 14:11:11 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61588 00:06:45.444 14:11:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.444 14:11:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.444 14:11:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61558 ]] 00:06:45.444 14:11:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61558 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61558 ']' 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61558 00:06:45.444 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61558) - No such process 00:06:45.444 Process with pid 61558 is not found 00:06:45.444 Process with pid 61588 is not found 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61558 is not found' 00:06:45.444 14:11:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61588 ]] 00:06:45.444 14:11:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61588 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61588 ']' 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61588 00:06:45.444 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61588) - No such process 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61588 is not found' 00:06:45.444 14:11:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.444 ************************************ 00:06:45.444 END TEST cpu_locks 00:06:45.444 ************************************ 00:06:45.444 00:06:45.444 real 0m31.099s 00:06:45.444 user 0m54.212s 00:06:45.444 sys 0m4.736s 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.444 14:11:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 ************************************ 00:06:45.702 END TEST event 00:06:45.702 ************************************ 00:06:45.702 00:06:45.702 real 0m56.922s 00:06:45.702 user 1m46.358s 00:06:45.702 sys 0m7.755s 00:06:45.702 14:11:13 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.702 14:11:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 14:11:13 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.702 14:11:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.702 14:11:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.702 14:11:13 -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 ************************************ 00:06:45.702 START TEST thread 00:06:45.702 ************************************ 00:06:45.702 14:11:13 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.702 * Looking for test storage... 
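The cpu_locks cleanup traced above follows a recognizable pattern: confirm the pid is non-empty and still alive, read its command name, then signal and reap it, treating "No such process" as a clean result. A minimal sketch, assuming the helper name and messages are illustrative rather than the exact autotest_common.sh code:

# Sketch of the killprocess-style cleanup seen in the cpu_locks trace above.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1                       # refuse an empty pid
    if ! kill -0 "$pid" 2>/dev/null; then           # already gone?
        echo "Process with pid $pid is not found"
        return 0
    fi
    local name
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0
    [[ $name == sudo ]] && return 1                 # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null          # reap if it is our child
}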
00:06:45.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:45.702 14:11:13 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.702 14:11:13 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.702 14:11:13 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.702 14:11:13 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.702 14:11:13 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.702 14:11:13 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.702 14:11:13 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.702 14:11:13 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.702 14:11:13 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.702 14:11:13 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.702 14:11:13 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.702 14:11:13 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.702 14:11:13 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.702 14:11:13 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.702 14:11:13 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.702 14:11:13 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:45.702 14:11:13 thread -- scripts/common.sh@345 -- # : 1 00:06:45.702 14:11:13 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.702 14:11:13 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.702 14:11:13 thread -- scripts/common.sh@365 -- # decimal 1 00:06:45.702 14:11:13 thread -- scripts/common.sh@353 -- # local d=1 00:06:45.702 14:11:13 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.702 14:11:13 thread -- scripts/common.sh@355 -- # echo 1 00:06:45.702 14:11:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.702 14:11:13 thread -- scripts/common.sh@366 -- # decimal 2 00:06:45.702 14:11:13 thread -- scripts/common.sh@353 -- # local d=2 00:06:45.702 14:11:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.702 14:11:13 thread -- scripts/common.sh@355 -- # echo 2 00:06:45.702 14:11:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.702 14:11:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.702 14:11:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.702 14:11:13 thread -- scripts/common.sh@368 -- # return 0 00:06:45.702 14:11:13 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.702 14:11:13 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.703 --rc genhtml_branch_coverage=1 00:06:45.703 --rc genhtml_function_coverage=1 00:06:45.703 --rc genhtml_legend=1 00:06:45.703 --rc geninfo_all_blocks=1 00:06:45.703 --rc geninfo_unexecuted_blocks=1 00:06:45.703 00:06:45.703 ' 00:06:45.703 14:11:13 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.703 --rc genhtml_branch_coverage=1 00:06:45.703 --rc genhtml_function_coverage=1 00:06:45.703 --rc genhtml_legend=1 00:06:45.703 --rc geninfo_all_blocks=1 00:06:45.703 --rc geninfo_unexecuted_blocks=1 00:06:45.703 00:06:45.703 ' 00:06:45.703 14:11:13 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:45.703 --rc genhtml_branch_coverage=1 00:06:45.703 --rc genhtml_function_coverage=1 00:06:45.703 --rc genhtml_legend=1 00:06:45.703 --rc geninfo_all_blocks=1 00:06:45.703 --rc geninfo_unexecuted_blocks=1 00:06:45.703 00:06:45.703 ' 00:06:45.703 14:11:13 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.703 --rc genhtml_branch_coverage=1 00:06:45.703 --rc genhtml_function_coverage=1 00:06:45.703 --rc genhtml_legend=1 00:06:45.703 --rc geninfo_all_blocks=1 00:06:45.703 --rc geninfo_unexecuted_blocks=1 00:06:45.703 00:06:45.703 ' 00:06:45.703 14:11:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.703 14:11:13 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:45.703 14:11:13 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.703 14:11:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.703 ************************************ 00:06:45.703 START TEST thread_poller_perf 00:06:45.703 ************************************ 00:06:45.703 14:11:13 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.703 [2024-11-06 14:11:13.296435] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:45.703 [2024-11-06 14:11:13.296726] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61778 ] 00:06:45.961 [2024-11-06 14:11:13.453303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.961 [2024-11-06 14:11:13.537942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.961 Running 1000 pollers for 1 seconds with 1 microseconds period. 
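The banner just printed decodes the flags this run was started with; a reading aid only, with meanings inferred from the banner itself rather than from tool documentation:

# poller_perf -b 1000 -l 1 -t 1, per the banner above:
#   -b 1000   register 1000 pollers
#   -l 1      poller period in microseconds (the second run below uses -l 0)
#   -t 1      total run time in seconds
/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1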
00:06:47.346 [2024-11-06T14:11:14.981Z] ====================================== 00:06:47.346 [2024-11-06T14:11:14.981Z] busy:2606554734 (cyc) 00:06:47.346 [2024-11-06T14:11:14.981Z] total_run_count: 383000 00:06:47.346 [2024-11-06T14:11:14.981Z] tsc_hz: 2600000000 (cyc) 00:06:47.346 [2024-11-06T14:11:14.981Z] ====================================== 00:06:47.346 [2024-11-06T14:11:14.981Z] poller_cost: 6805 (cyc), 2617 (nsec) 00:06:47.346 00:06:47.346 real 0m1.404s 00:06:47.346 user 0m1.230s 00:06:47.346 sys 0m0.067s 00:06:47.346 ************************************ 00:06:47.346 END TEST thread_poller_perf 00:06:47.346 ************************************ 00:06:47.346 14:11:14 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.346 14:11:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.346 14:11:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.346 14:11:14 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:47.346 14:11:14 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.346 14:11:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.346 ************************************ 00:06:47.346 START TEST thread_poller_perf 00:06:47.346 ************************************ 00:06:47.346 14:11:14 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.346 [2024-11-06 14:11:14.747512] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:47.346 [2024-11-06 14:11:14.747639] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61809 ] 00:06:47.346 [2024-11-06 14:11:14.909684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.603 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:47.603 [2024-11-06 14:11:15.008931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.562 [2024-11-06T14:11:16.197Z] ====================================== 00:06:48.562 [2024-11-06T14:11:16.197Z] busy:2603384776 (cyc) 00:06:48.562 [2024-11-06T14:11:16.197Z] total_run_count: 3961000 00:06:48.562 [2024-11-06T14:11:16.197Z] tsc_hz: 2600000000 (cyc) 00:06:48.562 [2024-11-06T14:11:16.197Z] ====================================== 00:06:48.562 [2024-11-06T14:11:16.197Z] poller_cost: 657 (cyc), 252 (nsec) 00:06:48.562 00:06:48.562 real 0m1.445s 00:06:48.562 user 0m1.277s 00:06:48.562 sys 0m0.061s 00:06:48.562 14:11:16 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.562 ************************************ 00:06:48.562 END TEST thread_poller_perf 00:06:48.562 ************************************ 00:06:48.562 14:11:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.819 14:11:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.820 ************************************ 00:06:48.820 END TEST thread 00:06:48.820 ************************************ 00:06:48.820 00:06:48.820 real 0m3.068s 00:06:48.820 user 0m2.607s 00:06:48.820 sys 0m0.246s 00:06:48.820 14:11:16 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.820 14:11:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.820 14:11:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:48.820 14:11:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:48.820 14:11:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.820 14:11:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.820 14:11:16 -- common/autotest_common.sh@10 -- # set +x 00:06:48.820 ************************************ 00:06:48.820 START TEST app_cmdline 00:06:48.820 ************************************ 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:48.820 * Looking for test storage... 
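Both poller_cost lines above are simple ratios of the reported counters. A sketch of the arithmetic that reproduces the printed numbers (not the tool's source):

busy=2603384776 runs=3961000 tsc_hz=2600000000   # counters from the second run
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
    cyc = b / r                                  # cycles per poller invocation
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz
}'
# -> poller_cost: 657 (cyc), 252 (nsec), matching the run above; the same
#    math on 2606554734 / 383000 gives the first run's 6805 cyc / 2617 nsec.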
00:06:48.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
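The "Waiting for process to start up..." message above comes from waitforlisten; conceptually it polls the new target's RPC socket until it answers while checking the pid stays alive. A minimal sketch under those assumptions (the real helper in autotest_common.sh is more elaborate):

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            &>/dev/null && return 0              # RPC server is answering
        sleep 0.1
    done
    return 1                                     # gave up waiting
}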
00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.820 14:11:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.820 --rc genhtml_branch_coverage=1 00:06:48.820 --rc genhtml_function_coverage=1 00:06:48.820 --rc genhtml_legend=1 00:06:48.820 --rc geninfo_all_blocks=1 00:06:48.820 --rc geninfo_unexecuted_blocks=1 00:06:48.820 00:06:48.820 ' 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.820 --rc genhtml_branch_coverage=1 00:06:48.820 --rc genhtml_function_coverage=1 00:06:48.820 --rc genhtml_legend=1 00:06:48.820 --rc geninfo_all_blocks=1 00:06:48.820 --rc geninfo_unexecuted_blocks=1 00:06:48.820 00:06:48.820 ' 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.820 --rc genhtml_branch_coverage=1 00:06:48.820 --rc genhtml_function_coverage=1 00:06:48.820 --rc genhtml_legend=1 00:06:48.820 --rc geninfo_all_blocks=1 00:06:48.820 --rc geninfo_unexecuted_blocks=1 00:06:48.820 00:06:48.820 ' 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.820 --rc genhtml_branch_coverage=1 00:06:48.820 --rc genhtml_function_coverage=1 00:06:48.820 --rc genhtml_legend=1 00:06:48.820 --rc geninfo_all_blocks=1 00:06:48.820 --rc geninfo_unexecuted_blocks=1 00:06:48.820 00:06:48.820 ' 00:06:48.820 14:11:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.820 14:11:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61898 00:06:48.820 14:11:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61898 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 61898 ']' 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.820 14:11:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.820 14:11:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.820 [2024-11-06 14:11:16.450017] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
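Since spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two RPCs are served and everything else must be rejected. The traces that follow confirm both sides; the checks reduce to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc spdk_get_version         # succeeds: version string plus major/minor/patch/suffix fields
$rpc rpc_get_methods          # succeeds: exactly the two allowed method names
$rpc env_dpdk_get_mem_stats   # fails: Code=-32601 Msg=Method not found, as logged below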
00:06:48.820 [2024-11-06 14:11:16.450147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61898 ] 00:06:49.078 [2024-11-06 14:11:16.609798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.078 [2024-11-06 14:11:16.709852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:50.031 { 00:06:50.031 "fields": { 00:06:50.031 "commit": "d1c46ed8e", 00:06:50.031 "major": 25, 00:06:50.031 "minor": 1, 00:06:50.031 "patch": 0, 00:06:50.031 "suffix": "-pre" 00:06:50.031 }, 00:06:50.031 "version": "SPDK v25.01-pre git sha1 d1c46ed8e" 00:06:50.031 } 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:50.031 14:11:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:50.031 14:11:17 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.289 2024/11/06 14:11:17 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:50.289 request: 00:06:50.289 { 00:06:50.289 "method": "env_dpdk_get_mem_stats", 00:06:50.289 "params": {} 00:06:50.289 } 00:06:50.289 Got JSON-RPC error response 00:06:50.289 GoRPCClient: error on JSON-RPC call 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.289 14:11:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61898 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 61898 ']' 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 61898 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61898 00:06:50.289 killing process with pid 61898 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61898' 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@971 -- # kill 61898 00:06:50.289 14:11:17 app_cmdline -- common/autotest_common.sh@976 -- # wait 61898 00:06:51.676 00:06:51.676 real 0m3.062s 00:06:51.676 user 0m3.297s 00:06:51.676 sys 0m0.475s 00:06:51.676 14:11:19 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.676 14:11:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.676 ************************************ 00:06:51.676 END TEST app_cmdline 00:06:51.676 ************************************ 00:06:51.934 14:11:19 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:51.934 14:11:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.934 14:11:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.934 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 ************************************ 00:06:51.934 START TEST version 00:06:51.934 ************************************ 00:06:51.934 14:11:19 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:51.935 * Looking for test storage... 
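The version suite starting here reads each component straight out of the installed header. The pipeline traced below, shown standalone (note that cut -f2 assumes a tab between the macro name and its value in version.h):

grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
    /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
# -> 25; the script repeats this for MINOR, PATCH and SUFFIX, then assembles
#    25, 1 and -pre into the "25.1rc0" it expects python to report.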
00:06:51.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:51.935 14:11:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.935 14:11:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.935 14:11:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.935 14:11:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.935 14:11:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.935 14:11:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.935 14:11:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.935 14:11:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.935 14:11:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.935 14:11:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.935 14:11:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.935 14:11:19 version -- scripts/common.sh@344 -- # case "$op" in 00:06:51.935 14:11:19 version -- scripts/common.sh@345 -- # : 1 00:06:51.935 14:11:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.935 14:11:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.935 14:11:19 version -- scripts/common.sh@365 -- # decimal 1 00:06:51.935 14:11:19 version -- scripts/common.sh@353 -- # local d=1 00:06:51.935 14:11:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.935 14:11:19 version -- scripts/common.sh@355 -- # echo 1 00:06:51.935 14:11:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.935 14:11:19 version -- scripts/common.sh@366 -- # decimal 2 00:06:51.935 14:11:19 version -- scripts/common.sh@353 -- # local d=2 00:06:51.935 14:11:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.935 14:11:19 version -- scripts/common.sh@355 -- # echo 2 00:06:51.935 14:11:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.935 14:11:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.935 14:11:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.935 14:11:19 version -- scripts/common.sh@368 -- # return 0 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:51.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.935 --rc genhtml_branch_coverage=1 00:06:51.935 --rc genhtml_function_coverage=1 00:06:51.935 --rc genhtml_legend=1 00:06:51.935 --rc geninfo_all_blocks=1 00:06:51.935 --rc geninfo_unexecuted_blocks=1 00:06:51.935 00:06:51.935 ' 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:51.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.935 --rc genhtml_branch_coverage=1 00:06:51.935 --rc genhtml_function_coverage=1 00:06:51.935 --rc genhtml_legend=1 00:06:51.935 --rc geninfo_all_blocks=1 00:06:51.935 --rc geninfo_unexecuted_blocks=1 00:06:51.935 00:06:51.935 ' 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:51.935 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:51.935 --rc genhtml_branch_coverage=1 00:06:51.935 --rc genhtml_function_coverage=1 00:06:51.935 --rc genhtml_legend=1 00:06:51.935 --rc geninfo_all_blocks=1 00:06:51.935 --rc geninfo_unexecuted_blocks=1 00:06:51.935 00:06:51.935 ' 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:51.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.935 --rc genhtml_branch_coverage=1 00:06:51.935 --rc genhtml_function_coverage=1 00:06:51.935 --rc genhtml_legend=1 00:06:51.935 --rc geninfo_all_blocks=1 00:06:51.935 --rc geninfo_unexecuted_blocks=1 00:06:51.935 00:06:51.935 ' 00:06:51.935 14:11:19 version -- app/version.sh@17 -- # get_header_version major 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.935 14:11:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # cut -f2 00:06:51.935 14:11:19 version -- app/version.sh@17 -- # major=25 00:06:51.935 14:11:19 version -- app/version.sh@18 -- # get_header_version minor 00:06:51.935 14:11:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # cut -f2 00:06:51.935 14:11:19 version -- app/version.sh@18 -- # minor=1 00:06:51.935 14:11:19 version -- app/version.sh@19 -- # get_header_version patch 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # cut -f2 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.935 14:11:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.935 14:11:19 version -- app/version.sh@19 -- # patch=0 00:06:51.935 14:11:19 version -- app/version.sh@20 -- # get_header_version suffix 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # cut -f2 00:06:51.935 14:11:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.935 14:11:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.935 14:11:19 version -- app/version.sh@20 -- # suffix=-pre 00:06:51.935 14:11:19 version -- app/version.sh@22 -- # version=25.1 00:06:51.935 14:11:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:51.935 14:11:19 version -- app/version.sh@28 -- # version=25.1rc0 00:06:51.935 14:11:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:51.935 14:11:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:51.935 14:11:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:51.935 14:11:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:51.935 00:06:51.935 real 0m0.205s 00:06:51.935 user 0m0.138s 00:06:51.935 sys 0m0.096s 00:06:51.935 14:11:19 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.935 14:11:19 version -- common/autotest_common.sh@10 -- # set +x 00:06:51.935 ************************************ 00:06:51.935 END TEST version 00:06:51.935 ************************************ 00:06:51.935 14:11:19 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:51.935 14:11:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:51.935 14:11:19 -- spdk/autotest.sh@194 -- # uname -s 00:06:51.935 14:11:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:51.935 14:11:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:51.935 14:11:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:51.935 14:11:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:51.935 14:11:19 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:51.935 14:11:19 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:51.935 14:11:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:51.935 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:06:52.193 14:11:19 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:52.193 14:11:19 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:52.193 14:11:19 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:52.193 14:11:19 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:52.193 14:11:19 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:52.193 14:11:19 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:52.193 14:11:19 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.193 14:11:19 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:52.193 14:11:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.193 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:06:52.193 ************************************ 00:06:52.193 START TEST nvmf_tcp 00:06:52.193 ************************************ 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.193 * Looking for test storage... 00:06:52.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.193 14:11:19 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.193 --rc genhtml_branch_coverage=1 00:06:52.193 --rc genhtml_function_coverage=1 00:06:52.193 --rc genhtml_legend=1 00:06:52.193 --rc geninfo_all_blocks=1 00:06:52.193 --rc geninfo_unexecuted_blocks=1 00:06:52.193 00:06:52.193 ' 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.193 --rc genhtml_branch_coverage=1 00:06:52.193 --rc genhtml_function_coverage=1 00:06:52.193 --rc genhtml_legend=1 00:06:52.193 --rc geninfo_all_blocks=1 00:06:52.193 --rc geninfo_unexecuted_blocks=1 00:06:52.193 00:06:52.193 ' 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.193 --rc genhtml_branch_coverage=1 00:06:52.193 --rc genhtml_function_coverage=1 00:06:52.193 --rc genhtml_legend=1 00:06:52.193 --rc geninfo_all_blocks=1 00:06:52.193 --rc geninfo_unexecuted_blocks=1 00:06:52.193 00:06:52.193 ' 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.193 --rc genhtml_branch_coverage=1 00:06:52.193 --rc genhtml_function_coverage=1 00:06:52.193 --rc genhtml_legend=1 00:06:52.193 --rc geninfo_all_blocks=1 00:06:52.193 --rc geninfo_unexecuted_blocks=1 00:06:52.193 00:06:52.193 ' 00:06:52.193 14:11:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:52.193 14:11:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:52.193 14:11:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.193 14:11:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.193 ************************************ 00:06:52.193 START TEST nvmf_target_core 00:06:52.193 ************************************ 00:06:52.193 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:52.193 * Looking for test storage... 00:06:52.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:52.193 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.193 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.193 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.451 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.452 --rc genhtml_branch_coverage=1 00:06:52.452 --rc genhtml_function_coverage=1 00:06:52.452 --rc genhtml_legend=1 00:06:52.452 --rc geninfo_all_blocks=1 00:06:52.452 --rc geninfo_unexecuted_blocks=1 00:06:52.452 00:06:52.452 ' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.452 --rc genhtml_branch_coverage=1 00:06:52.452 --rc genhtml_function_coverage=1 00:06:52.452 --rc genhtml_legend=1 00:06:52.452 --rc geninfo_all_blocks=1 00:06:52.452 --rc geninfo_unexecuted_blocks=1 00:06:52.452 00:06:52.452 ' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.452 --rc genhtml_branch_coverage=1 00:06:52.452 --rc genhtml_function_coverage=1 00:06:52.452 --rc genhtml_legend=1 00:06:52.452 --rc geninfo_all_blocks=1 00:06:52.452 --rc geninfo_unexecuted_blocks=1 00:06:52.452 00:06:52.452 ' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.452 --rc genhtml_branch_coverage=1 00:06:52.452 --rc genhtml_function_coverage=1 00:06:52.452 --rc genhtml_legend=1 00:06:52.452 --rc geninfo_all_blocks=1 00:06:52.452 --rc geninfo_unexecuted_blocks=1 00:06:52.452 00:06:52.452 ' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.452 ************************************ 00:06:52.452 START TEST nvmf_abort 00:06:52.452 ************************************ 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:52.452 * Looking for test storage... 
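The "[: : integer expression expected" complaint from common.sh line 33 above is bash's integer test run against an empty value, reproducible as:

[ '' -eq 1 ]   # -> [: : integer expression expected (status 2), harmless here
# A guarded form would silence it (sketch; VAR stands in for whatever
# feeds common.sh line 33, which this log does not show):
[ -n "${VAR:-}" ] && [ "$VAR" -eq 1 ] && echo "feature enabled"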
00:06:52.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.452 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.452 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.453 --rc genhtml_branch_coverage=1 00:06:52.453 --rc genhtml_function_coverage=1 00:06:52.453 --rc genhtml_legend=1 00:06:52.453 --rc geninfo_all_blocks=1 00:06:52.453 --rc geninfo_unexecuted_blocks=1 00:06:52.453 00:06:52.453 ' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.453 --rc genhtml_branch_coverage=1 00:06:52.453 --rc genhtml_function_coverage=1 00:06:52.453 --rc genhtml_legend=1 00:06:52.453 --rc geninfo_all_blocks=1 00:06:52.453 --rc geninfo_unexecuted_blocks=1 00:06:52.453 00:06:52.453 ' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.453 --rc genhtml_branch_coverage=1 00:06:52.453 --rc genhtml_function_coverage=1 00:06:52.453 --rc genhtml_legend=1 00:06:52.453 --rc geninfo_all_blocks=1 00:06:52.453 --rc geninfo_unexecuted_blocks=1 00:06:52.453 00:06:52.453 ' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.453 --rc genhtml_branch_coverage=1 00:06:52.453 --rc genhtml_function_coverage=1 00:06:52.453 --rc genhtml_legend=1 00:06:52.453 --rc geninfo_all_blocks=1 00:06:52.453 --rc geninfo_unexecuted_blocks=1 00:06:52.453 00:06:52.453 ' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
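The common.sh setup being sourced on both sides of this point derives the host identifiers from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the bare UUID doubles as the host ID. A sketch of that derivation (the parameter expansion is one plausible way to get the ID the trace shows, not necessarily the script's own):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # strip the NQN prefix, keep just the UUID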
00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:52.453 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:52.453 
14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:06:52.711 Cannot find device "nvmf_init_br" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:52.711 Cannot find device "nvmf_init_br2" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:52.711 Cannot find device "nvmf_tgt_br" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:52.711 Cannot find device "nvmf_tgt_br2" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:52.711 Cannot find device "nvmf_init_br" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:52.711 Cannot find device "nvmf_init_br2" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:52.711 Cannot find device "nvmf_tgt_br" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:52.711 Cannot find device "nvmf_tgt_br2" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:52.711 Cannot find device "nvmf_br" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:52.711 Cannot find device "nvmf_init_if" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:52.711 Cannot find device "nvmf_init_if2" 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:52.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:52.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:52.711 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:52.969 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:52.969 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:52.969 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:52.969 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:52.969 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:06:52.969 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:52.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:52.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:06:52.970 00:06:52.970 --- 10.0.0.3 ping statistics --- 00:06:52.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.970 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:52.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:52.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:06:52.970 00:06:52.970 --- 10.0.0.4 ping statistics --- 00:06:52.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.970 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:52.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:06:52.970 00:06:52.970 --- 10.0.0.1 ping statistics --- 00:06:52.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.970 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:52.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:52.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:06:52.970 00:06:52.970 --- 10.0.0.2 ping statistics --- 00:06:52.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.970 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62339 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62339 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 62339 ']' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.970 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.970 [2024-11-06 14:11:20.513739] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
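
Everything from `nvmf_veth_init` down to the four pings is the harness building a self-contained NVMe/TCP fabric on a single VM: the initiator-side veth ends (nvmf_init_if and nvmf_init_if2 at 10.0.0.1 and 10.0.0.2) stay in the root namespace, the target ends (nvmf_tgt_if and nvmf_tgt_if2 at 10.0.0.3 and 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, all four pairs are joined over the nvmf_br bridge, iptables ACCEPT rules tagged SPDK_NVMF open port 4420, and the pings prove reachability in both directions before nvmf_tgt is launched inside the namespace and `waitforlisten` blocks until it answers on /var/tmp/spdk.sock. A condensed sketch of the same idea, using one veth pair and no bridge (enough for a single initiator/target path; names, addresses, and flags follow the trace):

    # Sketch: minimal single-pair version of the topology the trace builds.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.3                                       # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
    # Then start the target inside the namespace, as the log does:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
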
00:06:52.970 [2024-11-06 14:11:20.513872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.237 [2024-11-06 14:11:20.668134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.237 [2024-11-06 14:11:20.773583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.237 [2024-11-06 14:11:20.773638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.237 [2024-11-06 14:11:20.773650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.237 [2024-11-06 14:11:20.773658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.237 [2024-11-06 14:11:20.773668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.237 [2024-11-06 14:11:20.775077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.237 [2024-11-06 14:11:20.775177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.237 [2024-11-06 14:11:20.775311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.828 [2024-11-06 14:11:21.402812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.828 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 Malloc0 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 
Delay0 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 [2024-11-06 14:11:21.508034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.198 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:54.198 [2024-11-06 14:11:21.722425] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:56.731 Initializing NVMe Controllers 00:06:56.731 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:56.731 controller IO queue size 128 less than required 00:06:56.731 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:56.731 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:56.731 Initialization complete. Launching workers. 
00:06:56.731 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38148 00:06:56.731 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38205, failed to submit 66 00:06:56.731 success 38148, unsuccessful 57, failed 0 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:56.731 rmmod nvme_tcp 00:06:56.731 rmmod nvme_fabrics 00:06:56.731 rmmod nvme_keyring 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62339 ']' 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62339 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 62339 ']' 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 62339 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62339 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:56.731 killing process with pid 62339 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62339' 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 62339 00:06:56.731 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 62339 00:06:56.989 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:56.989 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:56.989 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:56.989 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:06:57.249 00:06:57.249 real 0m4.947s 00:06:57.249 user 0m13.437s 00:06:57.249 sys 0m0.978s 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.249 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:57.249 ************************************ 00:06:57.249 END TEST nvmf_abort 00:06:57.249 ************************************ 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.508 ************************************ 00:06:57.508 START TEST nvmf_ns_hotplug_stress 00:06:57.508 ************************************ 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:57.508 * Looking for test storage... 00:06:57.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.508 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:57.508 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.509 --rc genhtml_branch_coverage=1 00:06:57.509 --rc genhtml_function_coverage=1 00:06:57.509 --rc genhtml_legend=1 00:06:57.509 --rc geninfo_all_blocks=1 00:06:57.509 --rc geninfo_unexecuted_blocks=1 00:06:57.509 00:06:57.509 ' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.509 --rc genhtml_branch_coverage=1 00:06:57.509 --rc genhtml_function_coverage=1 00:06:57.509 --rc genhtml_legend=1 00:06:57.509 --rc geninfo_all_blocks=1 00:06:57.509 --rc geninfo_unexecuted_blocks=1 00:06:57.509 00:06:57.509 ' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.509 --rc genhtml_branch_coverage=1 00:06:57.509 --rc genhtml_function_coverage=1 00:06:57.509 --rc genhtml_legend=1 00:06:57.509 --rc geninfo_all_blocks=1 00:06:57.509 --rc geninfo_unexecuted_blocks=1 00:06:57.509 00:06:57.509 ' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.509 --rc genhtml_branch_coverage=1 00:06:57.509 --rc genhtml_function_coverage=1 00:06:57.509 --rc genhtml_legend=1 00:06:57.509 --rc geninfo_all_blocks=1 00:06:57.509 --rc geninfo_unexecuted_blocks=1 00:06:57.509 00:06:57.509 ' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.509 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.509 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:57.510 14:11:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:57.510 Cannot find device "nvmf_init_br" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:57.510 Cannot find device "nvmf_init_br2" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:57.510 Cannot find device "nvmf_tgt_br" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:57.510 Cannot find device "nvmf_tgt_br2" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:57.510 Cannot find device "nvmf_init_br" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:57.510 Cannot find device "nvmf_init_br2" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:57.510 Cannot find device "nvmf_tgt_br" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:57.510 Cannot find device "nvmf_tgt_br2" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:57.510 Cannot find device "nvmf_br" 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:06:57.510 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:57.769 Cannot find device "nvmf_init_if" 00:06:57.769 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:06:57.769 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:57.769 Cannot find device "nvmf_init_if2" 00:06:57.769 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:06:57.769 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:57.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.769 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:06:57.769 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:57.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.769 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:57.770 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:57.770 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:06:57.770 00:06:57.770 --- 10.0.0.3 ping statistics --- 00:06:57.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.770 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:57.770 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:57.770 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:06:57.770 00:06:57.770 --- 10.0.0.4 ping statistics --- 00:06:57.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.770 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:57.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:06:57.770 00:06:57.770 --- 10.0.0.1 ping statistics --- 00:06:57.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.770 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:57.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:57.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:06:57.770 00:06:57.770 --- 10.0.0.2 ping statistics --- 00:06:57.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.770 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62655 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62655 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 62655 ']' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
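[Editor's note] The stretch of nvmf/common.sh xtrace above builds the test network: veth pairs whose *_if ends carry the addresses and whose *_br ends are enslaved to a root-namespace bridge (nvmf_br), with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace and iptables opened for NVMe/TCP port 4420. A condensed sketch of the same topology, reduced to one initiator and one target pair (the log creates two of each); names and addresses are taken from the log, cleanup and error handling omitted, run as root:

#!/usr/bin/env bash
# Condensed sketch of the topology nvmf/common.sh sets up above.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair per endpoint: the *_if end carries the IP address, the *_br
# end stays in the root namespace and is enslaved to the bridge below.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"   # target side lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Join the root-namespace ends so initiator and target share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP (port 4420) and keep bridged traffic flowing even when
# br_netfilter pushes it through the FORWARD chain.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3   # root namespace -> target namespace, as the log verifies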
00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.770 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.028 [2024-11-06 14:11:25.427426] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:58.028 [2024-11-06 14:11:25.427529] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.028 [2024-11-06 14:11:25.580477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.287 [2024-11-06 14:11:25.692222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.287 [2024-11-06 14:11:25.692276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.287 [2024-11-06 14:11:25.692288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.287 [2024-11-06 14:11:25.692296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.287 [2024-11-06 14:11:25.692306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
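[Editor's note] nvmfappstart launches the target inside the namespace with -m 0xE (three reactors, matching the "Reactor started on core 1/2/3" notices just below) and then blocks until the RPC socket answers. A hedged approximation of that start-and-wait step; the polling loop is an assumption about what waitforlisten does, not a copy of it:

# Hedged sketch: start nvmf_tgt in the namespace, then poll the UNIX-domain
# RPC socket until the target answers.
NS=nvmf_tgt_ns_spdk
SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for _ in $(seq 1 100); do
    # rpc_get_methods is a cheap RPC that succeeds once the app is listening.
    "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid"   # abort the wait early if the target already died
    sleep 0.1
done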
00:06:58.287 [2024-11-06 14:11:25.693726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.287 [2024-11-06 14:11:25.694028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.287 [2024-11-06 14:11:25.694106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:58.852 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:58.852 [2024-11-06 14:11:26.468266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.109 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.109 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:06:59.366 [2024-11-06 14:11:26.812700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:59.366 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:59.366 14:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:59.624 Malloc0 00:06:59.624 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:59.881 Delay0 00:06:59.881 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.138 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:00.395 NULL1 00:07:00.395 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:00.395 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62781 00:07:00.395 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:00.395 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:00.395 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.651 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.909 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:00.909 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:01.170 true 00:07:01.170 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:01.170 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.430 14:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.688 14:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:01.688 14:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:01.688 true 00:07:01.947 14:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:01.947 14:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.947 14:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.203 14:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:02.203 14:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:02.460 true 00:07:02.460 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:02.460 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.717 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.974 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 
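[Editor's note] With the target provisioned over RPC in the lines above (TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, a Delay0 bdev wrapping Malloc0, and a 1000 MiB NULL1) and spdk_nvme_perf reading in the background, the test enters its hotplug loop: as long as the perf process is alive, it detaches namespace 1, re-attaches Delay0, and grows NULL1 by one MiB. The repetitive stretch that follows is that loop unrolled; reconstructed from the xtrace (ns_hotplug_stress.sh lines 44-50):

# Reconstruction of the hotplug loop: keep churning the subsystem's
# namespaces while the initiator-side perf job is still alive.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
null_size=1000
while kill -0 "$PERF_PID" 2> /dev/null; do
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc" bdev_null_resize NULL1 "$null_size"   # new size in MiB
done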
00:07:02.974 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:03.232 true 00:07:03.232 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:03.232 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.490 14:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.747 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:03.747 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:03.747 true 00:07:03.747 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:03.747 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.005 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.263 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:04.263 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:04.263 true 00:07:04.263 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:04.263 14:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.521 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.777 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:04.777 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:05.034 true 00:07:05.034 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:05.034 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.292 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.550 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:05.550 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:05.550 true 00:07:05.550 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:05.550 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.808 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.065 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:06.066 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:06.323 true 00:07:06.323 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:06.323 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.581 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.581 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:06.581 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:06.838 true 00:07:06.838 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:06.838 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.096 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.355 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:07.355 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:07.615 true 00:07:07.615 14:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:07.615 14:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.876 14:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.136 14:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:08.136 14:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1012 00:07:08.394 true 00:07:08.394 14:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:08.394 14:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.653 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.911 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:08.911 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:08.911 true 00:07:08.911 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:08.911 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.169 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.427 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:09.427 14:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:09.685 true 00:07:09.685 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:09.685 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.685 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.942 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:09.942 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:10.236 true 00:07:10.236 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:10.236 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.495 14:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.753 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:10.753 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:10.753 true 00:07:10.753 14:11:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:10.753 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.011 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.269 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:11.269 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:11.528 true 00:07:11.528 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:11.528 14:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.786 14:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.786 14:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:11.786 14:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:12.044 true 00:07:12.044 14:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:12.044 14:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.301 14:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.558 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:12.558 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:12.816 true 00:07:12.816 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:12.816 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.075 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.332 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:13.332 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:13.332 true 00:07:13.332 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 62781 00:07:13.332 14:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.590 14:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.847 14:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:13.847 14:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:14.105 true 00:07:14.105 14:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:14.105 14:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.362 14:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.618 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:14.618 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:14.875 true 00:07:14.875 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:14.875 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.133 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.133 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:15.133 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:15.391 true 00:07:15.391 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:15.392 14:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.650 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.908 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:15.908 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:15.908 true 00:07:16.165 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:16.165 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.165 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.423 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:16.423 14:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:16.423 true 00:07:16.679 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:16.679 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.680 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.937 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:16.937 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:17.194 true 00:07:17.194 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:17.194 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.194 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.451 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:17.451 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:17.709 true 00:07:17.709 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:17.709 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.709 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.966 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:17.966 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:18.224 true 00:07:18.224 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:18.224 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.482 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.739 14:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:18.739 14:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:18.997 true 00:07:18.997 14:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:18.997 14:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.254 14:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.254 14:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:19.254 14:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:19.511 true 00:07:19.511 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:19.511 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.768 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.026 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:20.026 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:20.285 true 00:07:20.285 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:20.285 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.543 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.543 14:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:20.543 14:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:20.800 true 00:07:20.800 14:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:20.800 14:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.056 14:11:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.314 14:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:21.314 14:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:21.573 true 00:07:21.573 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:21.573 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.831 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.090 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:22.090 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:22.090 true 00:07:22.090 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:22.090 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.348 14:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.606 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:22.606 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:22.863 true 00:07:22.863 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:22.863 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.120 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.120 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:23.120 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:23.378 true 00:07:23.378 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:23.378 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.635 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.896 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:23.896 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:23.896 true 00:07:23.896 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:23.896 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.153 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.410 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:24.410 14:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:24.667 true 00:07:24.667 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:24.667 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.924 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.924 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:24.924 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:25.180 true 00:07:25.180 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:25.180 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.437 14:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.695 14:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:25.695 14:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:25.952 true 00:07:25.952 14:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:25.952 14:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.212 14:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.212 14:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:26.212 14:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:26.470 true 00:07:26.470 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:26.470 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.727 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.985 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:26.985 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:27.244 true 00:07:27.244 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:27.244 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.501 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.501 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:27.501 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:27.758 true 00:07:27.758 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:27.758 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.015 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.274 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:28.274 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:28.274 true 00:07:28.274 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:28.274 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.545 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.803 14:11:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:28.803 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:28.803 true 00:07:28.803 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:28.803 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.059 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.316 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:29.316 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:29.316 true 00:07:29.316 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:29.316 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.572 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.829 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:29.829 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:30.087 true 00:07:30.087 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:30.087 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.344 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.602 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:30.602 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:30.602 Initializing NVMe Controllers 00:07:30.602 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.602 Controller IO queue size 128, less than required. 00:07:30.602 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:30.602 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:30.602 Initialization complete. Launching workers. 
00:07:30.602 ======================================================== 00:07:30.602 Latency(us) 00:07:30.602 Device Information : IOPS MiB/s Average min max 00:07:30.602 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23637.22 11.54 5415.10 2929.86 15259.10 00:07:30.602 ======================================================== 00:07:30.602 Total : 23637.22 11.54 5415.10 2929.86 15259.10 00:07:30.602 00:07:30.859 true 00:07:30.859 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62781 00:07:30.859 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62781) - No such process 00:07:30.859 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62781 00:07:30.859 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.859 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.117 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:31.117 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:31.117 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:31.117 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.117 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:31.376 null0 00:07:31.376 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.376 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.376 14:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:31.634 null1 00:07:31.634 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.634 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.634 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:31.891 null2 00:07:31.891 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.891 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.891 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:31.891 null3 00:07:31.891 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.892 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.892 
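[Editor's note] Once the perf job exits ("No such process" above), the single-namespace loop gives way to a parallel phase: eight null bdevs are created as hotplug fodder, as the bdev_null_create calls around here show. Condensed, that provisioning step amounts to:

# Condensed form of the eight bdev_null_create calls in this stretch of log:
# 100 MiB null bdevs with a 4 KiB block size, one per worker thread.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
for i in $(seq 0 7); do
    "$rpc" bdev_null_create "null$i" 100 4096
done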
14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:32.149 null4 00:07:32.149 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.149 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.149 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:32.407 null5 00:07:32.407 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.407 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.407 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:32.665 null6 00:07:32.665 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.665 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.665 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:32.665 null7 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
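Two sanity checks fall out of the latency summary above: 23637.22 IOPS * 512 B per I/O is about 12.1 MB/s, i.e. the reported 11.54 MiB/s, which suggests the perf workload used 512-byte I/Os; and by Little's law, 23637.22 IOPS * 5415.10 us of average latency is about 128 commands in flight, exactly the capped queue size from the "Controller IO queue size 128, less than required" message, so the large average latency is queueing delay rather than slow media.

The sh@44-50 records above trace the first phase of the test: as long as the spdk_nvme_perf process (pid 62781) is alive, the script detaches and re-attaches namespace 1 (Delay0) on nqn.2016-06.io.spdk:cnode1 and grows the NULL1 bdev by one block per pass; the "No such process" message is the loop's normal exit once perf finishes, after which sh@53-55 reap it and drop both namespaces. A minimal bash sketch reconstructed from that xtrace (the rpc shorthand and the perf_pid variable are assumed names, not ones shown in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    while kill -0 "$perf_pid"; do                  # sh@44: run until perf exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1   # sh@45: detach NSID 1 under load
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0 # sh@46: re-attach the Delay0 bdev
        ((++null_size))                            # sh@49: counter passes 1045..1048 above
        "$rpc" bdev_null_resize NULL1 "$null_size" # sh@50: resize NULL1 while attached
    done
    wait "$perf_pid"                               # sh@53
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # sh@54
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 2       # sh@55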
00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
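The sh@58-64 records in this stretch set up the stress phase: eight 100 MiB null bdevs with a 4096-byte block size (null0 through null7, created above), then eight background add_remove workers, one per bdev, whose pids are collected so the sh@66 wait (pids 64213 64215 64216 64218 64220 64222 64224 64227 in this run) can join them all. Sketched from the xtrace, with the same rpc shorthand assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8                                     # sh@58
    pids=()                                        # sh@58
    for ((i = 0; i < nthreads; i++)); do           # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096  # sh@60: 100 MiB, 4096 B blocks
    done
    for ((i = 0; i < nthreads; i++)); do           # sh@62
        add_remove $((i + 1)) "null$i" &           # sh@63: NSID i+1 paired with null<i>
        pids+=($!)                                 # sh@64: remember worker pid
    done
    wait "${pids[@]}"                              # sh@66: join all eight workers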
00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.924 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
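Each worker body is the add_remove function whose sh@14-18 records interleave below: with its namespace ID and bdev fixed, it attaches and detaches that one namespace ten times, so eight namespaces hot-plug against cnode1 concurrently while the loop counters from all workers mix together in the trace. Reconstructed sketch (rpc shorthand assumed as above):

    add_remove() {
        local nsid=$1 bdev=$2                      # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do             # sh@16: ten add/remove rounds
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
        done
    }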
00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 64213 64215 64216 64218 64220 64222 64224 64227 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.925 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.182 14:12:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.182 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.440 14:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.440 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.440 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.440 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.697 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
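Note the trace never inspects namespace state between operations; the test's only pass signal is that every RPC keeps succeeding while the target stays up. To watch which namespaces are attached at any instant during the churn, one could poll the stock nvmf_get_subsystems RPC; an illustrative query, not part of this test (JSON field names per current SPDK output, worth verifying against your tree):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | python3 -c 'import json,sys; [print(ss["nqn"], sorted(ns["nsid"] for ns in ss["namespaces"])) for ss in json.load(sys.stdin)]'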
00:07:33.954 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.242 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.512 14:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.512 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.771 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.028 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.286 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.543 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.543 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.543 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.543 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.543 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.543 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.543 14:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.543 14:12:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.543 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.801 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.064 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.064 14:12:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.065 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.065 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.065 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.065 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.065 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.065 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.351 14:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:36.610 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.611 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.611 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.611 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.611 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.611 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.868 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.869 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.869 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.869 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.869 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.869 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.126 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@121 -- # sync 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.384 rmmod nvme_tcp 00:07:37.384 rmmod nvme_fabrics 00:07:37.384 rmmod nvme_keyring 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62655 ']' 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62655 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 62655 ']' 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 62655 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62655 00:07:37.384 killing process with pid 62655 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62655' 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 62655 00:07:37.384 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 62655 00:07:38.332 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.332 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.332 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.332 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:38.332 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:38.332 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 
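The add/remove churn traced above is the whole of the hotplug stress: lines 16-18 of target/ns_hotplug_stress.sh spin a counter while attaching and detaching namespaces on cnode1 as fast as the RPC channel allows. A minimal standalone sketch of that pattern, reconstructed from the rpc.py calls in the trace; the nsid-to-bdev pairing (nsid n on bdev null(n-1)) follows the trace, while the uniform random selection and the '|| true' failure tolerance are illustrative assumptions:

  #!/usr/bin/env bash
  # Sketch of the ns_hotplug_stress add/remove loop seen in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  i=0
  while (( i < 10 )); do
    n=$(( RANDOM % 8 + 1 ))                       # pick an nsid in 1..8 (assumed range)
    # Hot-add: attach null bdev "null<n-1>" as namespace <n>; tolerate the
    # failure when that nsid is already attached.
    "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
    m=$(( RANDOM % 8 + 1 ))
    # Hot-remove: detach namespace <m>; tolerate "not found" the same way.
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$m" || true
    (( ++i ))
  done

The irregular interleaving in the trace (some iterations add, some remove, some do both or neither) is consistent with this tolerate-and-continue structure: the test only cares that the target survives the churn.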
00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:07:38.333 00:07:38.333 real 0m40.986s 00:07:38.333 user 3m19.481s 00:07:38.333 sys 0m10.948s 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.333 ************************************ 00:07:38.333 END TEST nvmf_ns_hotplug_stress 00:07:38.333 ************************************ 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.333 14:12:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.333 ************************************ 00:07:38.333 START TEST nvmf_delete_subsystem 00:07:38.333 ************************************ 00:07:38.333 14:12:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:38.592 * Looking for test storage... 00:07:38.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:38.592 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:38.592 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:38.592 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:38.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.592 --rc genhtml_branch_coverage=1 00:07:38.592 --rc genhtml_function_coverage=1 00:07:38.592 --rc genhtml_legend=1 00:07:38.592 --rc geninfo_all_blocks=1 00:07:38.592 --rc geninfo_unexecuted_blocks=1 00:07:38.592 00:07:38.592 ' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:38.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.592 --rc genhtml_branch_coverage=1 00:07:38.592 --rc genhtml_function_coverage=1 00:07:38.592 --rc genhtml_legend=1 00:07:38.592 --rc geninfo_all_blocks=1 00:07:38.592 --rc geninfo_unexecuted_blocks=1 00:07:38.592 00:07:38.592 ' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:38.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.592 --rc genhtml_branch_coverage=1 00:07:38.592 --rc genhtml_function_coverage=1 00:07:38.592 --rc genhtml_legend=1 00:07:38.592 --rc geninfo_all_blocks=1 00:07:38.592 --rc geninfo_unexecuted_blocks=1 00:07:38.592 00:07:38.592 ' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:38.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.592 --rc genhtml_branch_coverage=1 00:07:38.592 --rc genhtml_function_coverage=1 00:07:38.592 --rc genhtml_legend=1 00:07:38.592 --rc geninfo_all_blocks=1 00:07:38.592 --rc geninfo_unexecuted_blocks=1 00:07:38.592 00:07:38.592 ' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.592 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.593 
14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... remainder duplicates the PATH value traced at paths/export.sh@2 above ...]:/var/lib/snapd/snap/bin 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicate as above ...]:/var/lib/snapd/snap/bin 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicate as above ...]:/var/lib/snapd/snap/bin 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- #

nvmftestinit 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:38.593 Cannot find device "nvmf_init_br" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:38.593 Cannot find device "nvmf_init_br2" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:38.593 Cannot find device "nvmf_tgt_br" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:38.593 Cannot find device "nvmf_tgt_br2" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:38.593 Cannot find device "nvmf_init_br" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:38.593 Cannot find device "nvmf_init_br2" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:38.593 Cannot find device "nvmf_tgt_br" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:38.593 Cannot find device "nvmf_tgt_br2" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:38.593 Cannot find device "nvmf_br" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:38.593 Cannot find device "nvmf_init_if" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:38.593 Cannot find device "nvmf_init_if2" 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:38.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:38.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:38.593 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:38.851 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:38.851 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:38.851 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:38.851 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:38.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:38.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:07:38.852 00:07:38.852 --- 10.0.0.3 ping statistics --- 00:07:38.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.852 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:38.852 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:38.852 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:07:38.852 00:07:38.852 --- 10.0.0.4 ping statistics --- 00:07:38.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.852 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:38.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:07:38.852 00:07:38.852 --- 10.0.0.1 ping statistics --- 00:07:38.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.852 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:38.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:38.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:07:38.852 00:07:38.852 --- 10.0.0.2 ping statistics --- 00:07:38.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.852 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=65610 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 65610 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 65610 ']' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.852 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:39.110 [2024-11-06 14:12:06.488108] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
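Before the target comes up, nvmf_veth_init has built the wiring that the four pings above just validated. Reduced to one initiator link and one target link (the harness creates two of each the same way, plus a matching iptables ACCEPT rule for the second initiator interface), the topology is, as a sketch:

  # Minimal re-creation of the veth/bridge topology validated by the pings.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                       # host-side ends meet on a bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                    # host -> target-namespace address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # and back

The bridge is what lets the host-side 10.0.0.1 reach 10.0.0.3 inside nvmf_tgt_ns_spdk, which is exactly what the first ping demonstrates.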
00:07:39.110 [2024-11-06 14:12:06.488226] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.110 [2024-11-06 14:12:06.647385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.366 [2024-11-06 14:12:06.748321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.367 [2024-11-06 14:12:06.748374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.367 [2024-11-06 14:12:06.748388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.367 [2024-11-06 14:12:06.748401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.367 [2024-11-06 14:12:06.748411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.367 [2024-11-06 14:12:06.749830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.367 [2024-11-06 14:12:06.749840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.949 [2024-11-06 14:12:07.380473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 
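The launch and readiness wait traced above, collapsed into one place. The nvmf_tgt command line is verbatim from the trace; the polling loop below is an illustrative stand-in for the harness's waitforlisten helper, using the standard rpc_get_methods RPC as a readiness probe:

  # Start the NVMe-oF target inside the test namespace with the traced flags:
  # shared-memory id 0, all tracepoint groups enabled, cores 0-1.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll until the RPC socket answers; bail out if the target died meanwhile.
  for _ in $(seq 1 100); do
      "$rpc" rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done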
00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.949 [2024-11-06 14:12:07.397010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.949 NULL1 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.949 Delay0 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65661 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:39.949 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:40.207 [2024-11-06 14:12:07.621877] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
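At this point the delete test has staged everything it needs: a TCP transport, a subsystem capped at ten namespaces, a listener on 10.0.0.3:4420, and a null bdev wrapped in a delay bdev so that roughly a second of I/O is always outstanding when the subsystem is torn down. Condensed into one sketch (all commands are from the trace; only the final wait is an illustrative addition), after which the flood of "completed with error" lines below is the expected outcome:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_create_transport -t tcp -o -u 8192        # transport options as traced
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # any host, max 10 ns
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512                # 1000 MB backing, 512 B blocks
  # Delay every I/O by ~1s (latency arguments are in microseconds), so
  # requests are still in flight when the subsystem disappears.
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

  # 5 s of 70/30 random read/write at queue depth 128, 512 B I/O, cores 2-3.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                               # let the delayed queue fill
  "$rpc" nvmf_delete_subsystem "$nqn"                   # in-flight I/O now errors out
  wait "$perf_pid" || true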
00:07:42.106 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.106 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.106 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.106 Read completed with error (sct=0, sc=8)
00:07:42.106 Read completed with error (sct=0, sc=8)
00:07:42.106 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" entries elided ...]
00:07:42.106 [2024-11-06 14:12:09.667246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010480 is same with the state(6) to be set
[... further failed submissions and error completions elided ...]
00:07:42.106 [2024-11-06 14:12:09.668859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set
[... queued completions, all "(sct=0, sc=8)", elided ...]
00:07:42.107 [2024-11-06 14:12:09.669464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000ff80 is same with the state(6) to be set
[... queued completions elided ...]
00:07:43.041 [2024-11-06 14:12:10.639810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f300 is same with the state(6) to be set
[... queued completions elided ...]
00:07:43.041 [2024-11-06 14:12:10.667973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fa80 is same with the state(6) to be set
[... queued completions elided ...]
00:07:43.041 [2024-11-06 14:12:10.668613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
[... queued completions elided ...]
00:07:43.042 [2024-11-06 14:12:10.669705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010200 is same with the state(6) to be set
00:07:43.042 Write completed with error (sct=0, sc=8)
00:07:43.042 Read completed with
error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Write completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Read completed with error (sct=0, sc=8) 00:07:43.042 Write completed with error (sct=0, sc=8) 00:07:43.042 [2024-11-06 14:12:10.669920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010700 is same with the state(6) to be set 00:07:43.042 14:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.042 14:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:43.042 14:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65661 00:07:43.042 14:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:43.042 Initializing NVMe Controllers 00:07:43.042 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:43.042 Controller IO queue size 128, less than required. 00:07:43.042 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:43.042 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:43.042 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:43.042 Initialization complete. Launching workers. 
00:07:43.042 ========================================================
00:07:43.042 Latency(us)
00:07:43.042 Device Information : IOPS MiB/s Average min max
00:07:43.042 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.77 0.09 886147.55 1221.40 1012630.67
00:07:43.042 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.34 0.08 906797.24 1271.98 1012968.28
00:07:43.042 ========================================================
00:07:43.042 Total : 340.10 0.17 896186.01 1221.40 1012968.28
00:07:43.042
00:07:43.042 [2024-11-06 14:12:10.673675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500000f300 (9): Bad file descriptor
00:07:43.042 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:43.607 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:43.607 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65661
00:07:43.607 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65661) - No such process
00:07:43.607 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65661
00:07:43.607 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:43.607 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 65661
00:07:43.607 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 65661
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:43.608
14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.608 [2024-11-06 14:12:11.191506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65707 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.608 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:43.867 [2024-11-06 14:12:11.404990] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
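[Note: the aborted I/O in the run above is the point of this test. delete_subsystem.sh tears down nqn.2016-06.io.spdk:cnode1 while spdk_nvme_perf still has commands queued, so outstanding Reads/Writes complete with generic NVMe status sct=0/sc=8 (consistent with commands aborted when their submission queue is deleted) and new submissions fail with -6 (likely -ENXIO once the qpair is gone).]

[Note: the half-second polls that follow come from the watchdog loop in delete_subsystem.sh. The script source itself is not part of this log, so the following is only a minimal sketch reconstructed from the traced line numbers (@57/@58/@60/@67), assuming the loop reads roughly as traced:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # @57: is spdk_nvme_perf (pid 65707 here) still running?
      sleep 0.5                               # @58: poll every half second
      (( delay++ > 20 )) && exit 1            # @60: give up after roughly 10 s of polling
  done
  wait "$perf_pid"                            # @67: then reap the pid to collect its exit status

kill -0 sends no signal; it only tests whether the pid still exists, which is why the loop falls through to wait once the perf process has exited.]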
00:07:44.124 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.124 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707 00:07:44.124 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.707 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.707 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707 00:07:44.707 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.309 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.309 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707 00:07:45.309 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.875 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.875 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707 00:07:45.875 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.133 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.133 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707 00:07:46.133 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.704 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.704 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707 00:07:46.704 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.964 Initializing NVMe Controllers 00:07:46.964 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:46.964 Controller IO queue size 128, less than required. 00:07:46.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:46.964 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:46.964 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:46.964 Initialization complete. Launching workers. 
00:07:46.964 ========================================================
00:07:46.964 Latency(us)
00:07:46.964 Device Information : IOPS MiB/s Average min max
00:07:46.964 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003833.70 1000096.77 1009627.38
00:07:46.964 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004070.71 1000094.18 1041798.35
00:07:46.964 ========================================================
00:07:46.964 Total : 256.00 0.12 1003952.20 1000094.18 1041798.35
00:07:46.964
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65707
00:07:47.222 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65707) - No such process
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65707
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:47.222 rmmod nvme_tcp
00:07:47.222 rmmod nvme_fabrics
00:07:47.222 rmmod nvme_keyring
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 65610 ']'
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 65610
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 65610 ']'
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 65610
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65610
00:07:47.222 killing process with pid 65610
14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '['
reactor_0 = sudo ']' 00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65610' 00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 65610 00:07:47.222 14:12:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 65610 00:07:47.786 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:47.786 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:47.786 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:47.786 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:47.786 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:47.786 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:47.786 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:07:48.044 00:07:48.044 real 0m9.711s 00:07:48.044 user 0m29.230s 00:07:48.044 sys 0m1.459s 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.044 ************************************ 00:07:48.044 END TEST nvmf_delete_subsystem 00:07:48.044 ************************************ 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.044 ************************************ 00:07:48.044 START TEST nvmf_host_management 00:07:48.044 ************************************ 00:07:48.044 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.302 * Looking for test storage... 00:07:48.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.302 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:48.303 
14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:48.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.303 --rc genhtml_branch_coverage=1 00:07:48.303 --rc genhtml_function_coverage=1 00:07:48.303 --rc genhtml_legend=1 00:07:48.303 --rc geninfo_all_blocks=1 00:07:48.303 --rc geninfo_unexecuted_blocks=1 00:07:48.303 00:07:48.303 ' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:48.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.303 --rc genhtml_branch_coverage=1 00:07:48.303 --rc genhtml_function_coverage=1 00:07:48.303 --rc genhtml_legend=1 00:07:48.303 --rc geninfo_all_blocks=1 00:07:48.303 --rc geninfo_unexecuted_blocks=1 00:07:48.303 00:07:48.303 ' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:48.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.303 --rc genhtml_branch_coverage=1 00:07:48.303 --rc genhtml_function_coverage=1 00:07:48.303 --rc genhtml_legend=1 00:07:48.303 --rc geninfo_all_blocks=1 00:07:48.303 --rc geninfo_unexecuted_blocks=1 00:07:48.303 00:07:48.303 ' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:48.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.303 --rc genhtml_branch_coverage=1 00:07:48.303 --rc 
genhtml_function_coverage=1 00:07:48.303 --rc genhtml_legend=1 00:07:48.303 --rc geninfo_all_blocks=1 00:07:48.303 --rc geninfo_unexecuted_blocks=1 00:07:48.303 00:07:48.303 ' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:48.303 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.303 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:48.304 14:12:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:48.304 Cannot find device "nvmf_init_br" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:48.304 Cannot find device "nvmf_init_br2" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:48.304 Cannot find device "nvmf_tgt_br" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.304 Cannot find device "nvmf_tgt_br2" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:48.304 Cannot find device "nvmf_init_br" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:48.304 Cannot find device "nvmf_init_br2" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:48.304 Cannot find device "nvmf_tgt_br" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:48.304 Cannot find device "nvmf_tgt_br2" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:48.304 Cannot find device "nvmf_br" 00:07:48.304 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:48.304 14:12:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:48.570 Cannot find device "nvmf_init_if" 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:48.570 Cannot find device "nvmf_init_if2" 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:48.570 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:48.570 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:48.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:48.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:48.571 00:07:48.571 --- 10.0.0.3 ping statistics --- 00:07:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.571 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:48.571 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:48.571 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:07:48.571 00:07:48.571 --- 10.0.0.4 ping statistics --- 00:07:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.571 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:48.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:48.571 00:07:48.571 --- 10.0.0.1 ping statistics --- 00:07:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.571 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:48.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:07:48.571 00:07:48.571 --- 10.0.0.2 ping statistics --- 00:07:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.571 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=66006 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 66006 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # 
'[' -z 66006 ']' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:48.571 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.829 [2024-11-06 14:12:16.205216] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:48.829 [2024-11-06 14:12:16.205424] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.829 [2024-11-06 14:12:16.357587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.829 [2024-11-06 14:12:16.459396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.829 [2024-11-06 14:12:16.459451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.829 [2024-11-06 14:12:16.459464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.829 [2024-11-06 14:12:16.459471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.829 [2024-11-06 14:12:16.459480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
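[Note: the app_setup_trace NOTICEs just above double as a how-to for inspecting this target. Both commands below are taken directly from those log lines; run them on the test host while nvmf_tgt instance 0 is alive:

  spdk_trace -s nvmf -i 0          # capture a snapshot of the enabled (0xFFFF) tracepoint groups at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/   # or copy the shared-memory trace file for offline analysis/debug
]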
00:07:48.829 [2024-11-06 14:12:16.460884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.829 [2024-11-06 14:12:16.461026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.829 [2024-11-06 14:12:16.461050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.829 [2024-11-06 14:12:16.461081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 [2024-11-06 14:12:17.073175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 Malloc0 00:07:49.762 [2024-11-06 14:12:17.177217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=66082 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 66082 /var/tmp/bdevperf.sock 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 66082 ']' 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.762 { 00:07:49.762 "params": { 00:07:49.762 "name": "Nvme$subsystem", 00:07:49.762 "trtype": "$TEST_TRANSPORT", 00:07:49.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.762 "adrfam": "ipv4", 00:07:49.762 "trsvcid": "$NVMF_PORT", 00:07:49.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.762 "hdgst": ${hdgst:-false}, 00:07:49.762 "ddgst": ${ddgst:-false} 00:07:49.762 }, 00:07:49.762 "method": "bdev_nvme_attach_controller" 00:07:49.762 } 00:07:49.762 EOF 00:07:49.762 )") 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:49.762 14:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.762 "params": { 00:07:49.762 "name": "Nvme0", 00:07:49.762 "trtype": "tcp", 00:07:49.762 "traddr": "10.0.0.3", 00:07:49.762 "adrfam": "ipv4", 00:07:49.762 "trsvcid": "4420", 00:07:49.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:49.762 "hdgst": false, 00:07:49.762 "ddgst": false 00:07:49.763 }, 00:07:49.763 "method": "bdev_nvme_attach_controller" 00:07:49.763 }' 00:07:49.763 [2024-11-06 14:12:17.273727] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:49.763 [2024-11-06 14:12:17.273822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66082 ] 00:07:50.020 [2024-11-06 14:12:17.425984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.020 [2024-11-06 14:12:17.511010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.276 Running I/O for 10 seconds... 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.533 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.790 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.790 [2024-11-06 14:12:18.199943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.199994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.200024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.200040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.200055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.200070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.200085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.200099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.790 [2024-11-06 14:12:18.200114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.790 [2024-11-06 14:12:18.200122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:50.791 [2024-11-06 14:12:18.200129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 
[2024-11-06 14:12:18.200276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 
14:12:18.200421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 
14:12:18.200571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 
14:12:18.200730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 14:12:18.200862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.791 [2024-11-06 14:12:18.200871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.791 [2024-11-06 
14:12:18.200877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:50.791 [2024-11-06 14:12:18.200884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:50.791 [2024-11-06 14:12:18.200891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:50.791 [2024-11-06 14:12:18.200899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:50.791 [2024-11-06 14:12:18.200905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:50.791 [2024-11-06 14:12:18.200913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:50.791 [2024-11-06 14:12:18.200920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:50.791 [2024-11-06 14:12:18.200928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:50.792 [2024-11-06 14:12:18.200934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:50.792 [2024-11-06 14:12:18.200943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:50.792 [2024-11-06 14:12:18.200960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:50.792 [2024-11-06 14:12:18.200968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set
00:07:50.792 [2024-11-06 14:12:18.202126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:50.792 task offset: 114560 on job bdev=Nvme0n1 fails
00:07:50.792
00:07:50.792 Latency(us)
00:07:50.792 [2024-11-06T14:12:18.427Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:50.792 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:50.792 Job: Nvme0n1 ended in about 0.42 seconds with error
00:07:50.792 Verification LBA range: start 0x0 length 0x400
00:07:50.792 Nvme0n1                     :       0.42    1986.47     124.15     152.81       0.00   29158.85    5545.35   28029.24
00:07:50.792 [2024-11-06T14:12:18.427Z] ===================================================================================================================
00:07:50.792 [2024-11-06T14:12:18.427Z] Total                       :            1986.47     124.15     152.81       0.00   29158.85    5545.35   28029.24
00:07:50.792 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.792 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:50.792 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.792 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:50.792
[2024-11-06 14:12:18.205374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.792 [2024-11-06 14:12:18.205415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:07:50.792 [2024-11-06 14:12:18.209637] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:50.792 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.792 14:12:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 66082 00:07:51.721 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (66082) - No such process 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.721 { 00:07:51.721 "params": { 00:07:51.721 "name": "Nvme$subsystem", 00:07:51.721 "trtype": "$TEST_TRANSPORT", 00:07:51.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.721 "adrfam": "ipv4", 00:07:51.721 "trsvcid": "$NVMF_PORT", 00:07:51.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.721 "hdgst": ${hdgst:-false}, 00:07:51.721 "ddgst": ${ddgst:-false} 00:07:51.721 }, 00:07:51.721 "method": "bdev_nvme_attach_controller" 00:07:51.721 } 00:07:51.721 EOF 00:07:51.721 )") 00:07:51.721 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:51.722 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:07:51.722 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:07:51.722 14:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:51.722 "params": {
00:07:51.722 "name": "Nvme0",
00:07:51.722 "trtype": "tcp",
00:07:51.722 "traddr": "10.0.0.3",
00:07:51.722 "adrfam": "ipv4",
00:07:51.722 "trsvcid": "4420",
00:07:51.722 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:51.722 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:51.722 "hdgst": false,
00:07:51.722 "ddgst": false
00:07:51.722 },
00:07:51.722 "method": "bdev_nvme_attach_controller"
00:07:51.722 }'
00:07:51.722 [2024-11-06 14:12:19.299742] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:07:51.722 [2024-11-06 14:12:19.299872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66129 ]
00:07:51.980 [2024-11-06 14:12:19.457700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.980 [2024-11-06 14:12:19.539655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.238 Running I/O for 1 seconds...
00:07:53.608 2286.00 IOPS, 142.88 MiB/s
00:07:53.608
00:07:53.608 Latency(us)
00:07:53.608 [2024-11-06T14:12:21.244Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:53.609 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:53.609 Verification LBA range: start 0x0 length 0x400
00:07:53.609 Nvme0n1                     :       1.02    2317.32     144.83       0.00       0.00   27156.49    3755.72   24802.86
00:07:53.609 [2024-11-06T14:12:21.244Z] ===================================================================================================================
00:07:53.609 [2024-11-06T14:12:21.244Z] Total                       :            2317.32     144.83       0.00       0.00   27156.49    3755.72   24802.86
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:53.865 rmmod nvme_tcp
00:07:53.865 rmmod nvme_fabrics
00:07:53.865 rmmod nvme_keyring
00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- #
modprobe -v -r nvme-fabrics 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 66006 ']' 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 66006 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 66006 ']' 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 66006 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66006 00:07:53.865 killing process with pid 66006 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66006' 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 66006 00:07:53.865 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 66006 00:07:54.797 [2024-11-06 14:12:22.116937] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:54.797 ************************************ 00:07:54.797 END TEST nvmf_host_management 00:07:54.797 ************************************ 00:07:54.797 00:07:54.797 real 0m6.723s 00:07:54.797 user 0m25.855s 00:07:54.797 sys 0m1.227s 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.797 ************************************ 00:07:54.797 START TEST nvmf_lvol 00:07:54.797 ************************************ 00:07:54.797 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:55.056 * Looking for test storage... 
00:07:55.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.056 --rc genhtml_branch_coverage=1 00:07:55.056 --rc genhtml_function_coverage=1 00:07:55.056 --rc genhtml_legend=1 00:07:55.056 --rc geninfo_all_blocks=1 00:07:55.056 --rc geninfo_unexecuted_blocks=1 00:07:55.056 00:07:55.056 ' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.056 --rc genhtml_branch_coverage=1 00:07:55.056 --rc genhtml_function_coverage=1 00:07:55.056 --rc genhtml_legend=1 00:07:55.056 --rc geninfo_all_blocks=1 00:07:55.056 --rc geninfo_unexecuted_blocks=1 00:07:55.056 00:07:55.056 ' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.056 --rc genhtml_branch_coverage=1 00:07:55.056 --rc genhtml_function_coverage=1 00:07:55.056 --rc genhtml_legend=1 00:07:55.056 --rc geninfo_all_blocks=1 00:07:55.056 --rc geninfo_unexecuted_blocks=1 00:07:55.056 00:07:55.056 ' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.056 --rc genhtml_branch_coverage=1 00:07:55.056 --rc genhtml_function_coverage=1 00:07:55.056 --rc genhtml_legend=1 00:07:55.056 --rc geninfo_all_blocks=1 00:07:55.056 --rc geninfo_unexecuted_blocks=1 00:07:55.056 00:07:55.056 ' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.056 14:12:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.056 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:55.056 
14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:55.056 Cannot find device "nvmf_init_br" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:55.056 Cannot find device "nvmf_init_br2" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:55.056 Cannot find device "nvmf_tgt_br" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.056 Cannot find device "nvmf_tgt_br2" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:55.056 Cannot find device "nvmf_init_br" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:55.056 Cannot find device "nvmf_init_br2" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:55.056 Cannot find device "nvmf_tgt_br" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:55.056 Cannot find device "nvmf_tgt_br2" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:55.056 Cannot find device "nvmf_br" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:55.056 Cannot find device "nvmf_init_if" 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:55.056 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:55.316 Cannot find device "nvmf_init_if2" 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:55.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:55.316 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:55.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:55.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:07:55.317 00:07:55.317 --- 10.0.0.3 ping statistics --- 00:07:55.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.317 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:55.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:55.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:07:55.317 00:07:55.317 --- 10.0.0.4 ping statistics --- 00:07:55.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.317 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:55.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:07:55.317 00:07:55.317 --- 10.0.0.1 ping statistics --- 00:07:55.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.317 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:55.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:55.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:07:55.317 00:07:55.317 --- 10.0.0.2 ping statistics --- 00:07:55.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.317 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66409 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66409 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 66409 ']' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.317 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:55.317 [2024-11-06 14:12:22.933330] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
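The nvmfappstart -m 0x7 step traced above reduces to two moves: launch nvmf_tgt inside the target namespace, then block until it answers JSON-RPC before any configuration is attempted. A minimal sketch using the paths visible in this run; the poll loop is an assumed stand-in for the waitforlisten helper:

ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# wait until the app serves RPCs on its default UNIX socket before configuring it
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done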
00:07:55.317 [2024-11-06 14:12:22.933437] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.615 [2024-11-06 14:12:23.097721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.615 [2024-11-06 14:12:23.238350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.615 [2024-11-06 14:12:23.238418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.615 [2024-11-06 14:12:23.238438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.615 [2024-11-06 14:12:23.238452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.615 [2024-11-06 14:12:23.238468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.615 [2024-11-06 14:12:23.240460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.615 [2024-11-06 14:12:23.240557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.615 [2024-11-06 14:12:23.240575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.182 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:56.182 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:56.182 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:56.182 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:56.182 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:56.182 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.182 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:56.440 [2024-11-06 14:12:24.008452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.440 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:57.031 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:57.031 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:57.031 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:57.031 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:57.290 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:57.548 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=79c7b050-7449-4df1-a4ce-8fc92764fcac 00:07:57.548 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
79c7b050-7449-4df1-a4ce-8fc92764fcac lvol 20 00:07:57.808 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=eed2bf54-300a-4e17-a4d8-b6f13e3080c6 00:07:57.808 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:58.065 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eed2bf54-300a-4e17-a4d8-b6f13e3080c6 00:07:58.351 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:58.351 [2024-11-06 14:12:25.938345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:58.351 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:58.609 14:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66552 00:07:58.609 14:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:58.609 14:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:59.542 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot eed2bf54-300a-4e17-a4d8-b6f13e3080c6 MY_SNAPSHOT 00:07:59.800 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d0acc388-5201-4559-9445-80e270ea7f02 00:07:59.800 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize eed2bf54-300a-4e17-a4d8-b6f13e3080c6 30 00:08:00.064 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d0acc388-5201-4559-9445-80e270ea7f02 MY_CLONE 00:08:00.321 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e6e896ba-aee3-4e11-b38f-50bef0715ab5 00:08:00.321 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e6e896ba-aee3-4e11-b38f-50bef0715ab5 00:08:00.886 14:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66552 00:08:08.990 Initializing NVMe Controllers 00:08:08.990 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:08.990 Controller IO queue size 128, less than required. 00:08:08.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.990 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:08.990 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:08.990 Initialization complete. Launching workers. 
00:08:08.990 ======================================================== 00:08:08.990 Latency(us) 00:08:08.990 Device Information : IOPS MiB/s Average min max 00:08:08.990 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14681.60 57.35 8720.87 250.73 105711.64 00:08:08.990 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14669.40 57.30 8726.86 4478.39 83952.76 00:08:08.990 ======================================================== 00:08:08.990 Total : 29351.00 114.65 8723.86 250.73 105711.64 00:08:08.990 00:08:08.990 14:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:09.246 14:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete eed2bf54-300a-4e17-a4d8-b6f13e3080c6 00:08:09.503 14:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79c7b050-7449-4df1-a4ce-8fc92764fcac 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.760 rmmod nvme_tcp 00:08:09.760 rmmod nvme_fabrics 00:08:09.760 rmmod nvme_keyring 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66409 ']' 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66409 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 66409 ']' 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 66409 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66409 00:08:09.760 killing process with pid 66409 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.760 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.761 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 66409' 00:08:09.761 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 66409 00:08:09.761 14:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 66409 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:10.692 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:10.693 ************************************ 00:08:10.693 END TEST nvmf_lvol 00:08:10.693 ************************************ 00:08:10.693 00:08:10.693 real 0m15.826s 00:08:10.693 user 
1m5.324s 00:08:10.693 sys 0m3.082s 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.693 ************************************ 00:08:10.693 START TEST nvmf_lvs_grow 00:08:10.693 ************************************ 00:08:10.693 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:10.951 * Looking for test storage... 00:08:10.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.951 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:10.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.952 --rc genhtml_branch_coverage=1 00:08:10.952 --rc genhtml_function_coverage=1 00:08:10.952 --rc genhtml_legend=1 00:08:10.952 --rc geninfo_all_blocks=1 00:08:10.952 --rc geninfo_unexecuted_blocks=1 00:08:10.952 00:08:10.952 ' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:10.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.952 --rc genhtml_branch_coverage=1 00:08:10.952 --rc genhtml_function_coverage=1 00:08:10.952 --rc genhtml_legend=1 00:08:10.952 --rc geninfo_all_blocks=1 00:08:10.952 --rc geninfo_unexecuted_blocks=1 00:08:10.952 00:08:10.952 ' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:10.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.952 --rc genhtml_branch_coverage=1 00:08:10.952 --rc genhtml_function_coverage=1 00:08:10.952 --rc genhtml_legend=1 00:08:10.952 --rc geninfo_all_blocks=1 00:08:10.952 --rc geninfo_unexecuted_blocks=1 00:08:10.952 00:08:10.952 ' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:10.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.952 --rc genhtml_branch_coverage=1 00:08:10.952 --rc genhtml_function_coverage=1 00:08:10.952 --rc genhtml_legend=1 00:08:10.952 --rc geninfo_all_blocks=1 00:08:10.952 --rc geninfo_unexecuted_blocks=1 00:08:10.952 00:08:10.952 ' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:10.952 14:12:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.952 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
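The rpc_py and bdevperf_rpc_sock assignments above are the entire client side of these tests: every configuration step that follows is a JSON-RPC call issued through rpc.py over a UNIX socket, either to the target's default socket or to a separate bdevperf instance. A minimal sketch of the pattern, assuming an app is already listening:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py rpc_get_methods                            # default socket, /var/tmp/spdk.sock
$rpc_py -s /var/tmp/bdevperf.sock rpc_get_methods  # same call routed to bdevperf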
00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:10.952 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
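# The interface and address variables being set here name a fixed veth fixture
# that nvmf_veth_init builds next. Condensed to one initiator/target pair, as a
# sketch (the real common.sh also wires the nvmf_init_if2/nvmf_tgt_if2 pair and
# the iptables rules):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3   # initiator -> target reachability, as verified in the trace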
00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:10.953 Cannot find device "nvmf_init_br" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:10.953 Cannot find device "nvmf_init_br2" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:10.953 Cannot find device "nvmf_tgt_br" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.953 Cannot find device "nvmf_tgt_br2" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:10.953 Cannot find device "nvmf_init_br" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:10.953 Cannot find device "nvmf_init_br2" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:10.953 Cannot find device "nvmf_tgt_br" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:10.953 Cannot find device "nvmf_tgt_br2" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:10.953 Cannot find device "nvmf_br" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:10.953 Cannot find device "nvmf_init_if" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:10.953 Cannot find device "nvmf_init_if2" 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:10.953 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
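# Once the veth peers are enslaved to nvmf_br, common.sh opens the NVMe/TCP port
# through ipts, a wrapper that tags each iptables rule with an SPDK_NVMF comment
# so cleanup can strip exactly those rules. The pattern, as a sketch:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# and iptr reverts only the tagged rules, as seen in the lvol cleanup above:
iptables-save | grep -v SPDK_NVMF | iptables-restore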
00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:11.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:08:11.211 00:08:11.211 --- 10.0.0.3 ping statistics --- 00:08:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.211 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:11.211 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:11.211 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:08:11.211 00:08:11.211 --- 10.0.0.4 ping statistics --- 00:08:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.211 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:11.211 00:08:11.211 --- 10.0.0.1 ping statistics --- 00:08:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.211 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:11.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:11.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:08:11.211 00:08:11.211 --- 10.0.0.2 ping statistics --- 00:08:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.211 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66981 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66981 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 66981 ']' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:11.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:11.211 14:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.211 [2024-11-06 14:12:38.838278] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
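Note the ipts helper used for the firewall rules above (expanded at nvmf/common.sh@790): every iptables rule the harness inserts is tagged with an 'SPDK_NVMF:' comment embedding the original arguments, which is what makes teardown reliable, since cleanup can strip exactly the rules this run added and nothing else. A sketch of the idea (the wrapper body and the cleanup line are an approximation, not copied from the harness):

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # Later: remove only the tagged rules, leaving the rest of the ruleset intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore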
00:08:11.211 [2024-11-06 14:12:38.838553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.469 [2024-11-06 14:12:38.995132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.469 [2024-11-06 14:12:39.079068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.469 [2024-11-06 14:12:39.079129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.469 [2024-11-06 14:12:39.079143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.469 [2024-11-06 14:12:39.079159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.469 [2024-11-06 14:12:39.079169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.469 [2024-11-06 14:12:39.079988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.072 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.072 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:12.072 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.072 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.072 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.072 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.072 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:12.335 [2024-11-06 14:12:39.874410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.335 ************************************ 00:08:12.335 START TEST lvs_grow_clean 00:08:12.335 ************************************ 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:12.335 14:12:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:12.335 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.593 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:12.593 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:12.850 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:12.850 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:12.850 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:13.107 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:13.107 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:13.107 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 lvol 150 00:08:13.107 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e5beb607-ede8-48a3-9b48-49843d1d6fcc 00:08:13.107 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.107 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:13.364 [2024-11-06 14:12:40.899877] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:13.364 [2024-11-06 14:12:40.899967] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:13.364 true 00:08:13.364 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:13.364 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:13.621 14:12:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:13.621 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:13.879 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5beb607-ede8-48a3-9b48-49843d1d6fcc 00:08:14.136 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:14.136 [2024-11-06 14:12:41.736380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:14.136 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67137 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67137 /var/tmp/bdevperf.sock 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 67137 ']' 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:14.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.393 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:14.665 [2024-11-06 14:12:42.041164] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
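Condensed from the trace above: once the TCP transport exists, exposing the lvol over NVMe/TCP is three RPCs against the target's socket (NQN, serial, address, and port exactly as used in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5beb607-ede8-48a3-9b48-49843d1d6fcc
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420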
00:08:14.665 [2024-11-06 14:12:42.041288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67137 ] 00:08:14.665 [2024-11-06 14:12:42.200191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.665 [2024-11-06 14:12:42.298560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.596 14:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:15.596 14:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:15.596 14:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:15.596 Nvme0n1 00:08:15.596 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:15.856 [ 00:08:15.856 { 00:08:15.856 "aliases": [ 00:08:15.856 "e5beb607-ede8-48a3-9b48-49843d1d6fcc" 00:08:15.856 ], 00:08:15.856 "assigned_rate_limits": { 00:08:15.856 "r_mbytes_per_sec": 0, 00:08:15.856 "rw_ios_per_sec": 0, 00:08:15.856 "rw_mbytes_per_sec": 0, 00:08:15.856 "w_mbytes_per_sec": 0 00:08:15.856 }, 00:08:15.856 "block_size": 4096, 00:08:15.856 "claimed": false, 00:08:15.856 "driver_specific": { 00:08:15.856 "mp_policy": "active_passive", 00:08:15.856 "nvme": [ 00:08:15.856 { 00:08:15.856 "ctrlr_data": { 00:08:15.856 "ana_reporting": false, 00:08:15.856 "cntlid": 1, 00:08:15.856 "firmware_revision": "25.01", 00:08:15.856 "model_number": "SPDK bdev Controller", 00:08:15.856 "multi_ctrlr": true, 00:08:15.856 "oacs": { 00:08:15.856 "firmware": 0, 00:08:15.856 "format": 0, 00:08:15.856 "ns_manage": 0, 00:08:15.856 "security": 0 00:08:15.856 }, 00:08:15.856 "serial_number": "SPDK0", 00:08:15.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.856 "vendor_id": "0x8086" 00:08:15.856 }, 00:08:15.856 "ns_data": { 00:08:15.856 "can_share": true, 00:08:15.856 "id": 1 00:08:15.856 }, 00:08:15.856 "trid": { 00:08:15.856 "adrfam": "IPv4", 00:08:15.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.856 "traddr": "10.0.0.3", 00:08:15.856 "trsvcid": "4420", 00:08:15.856 "trtype": "TCP" 00:08:15.856 }, 00:08:15.856 "vs": { 00:08:15.856 "nvme_version": "1.3" 00:08:15.856 } 00:08:15.856 } 00:08:15.856 ] 00:08:15.856 }, 00:08:15.856 "memory_domains": [ 00:08:15.856 { 00:08:15.856 "dma_device_id": "system", 00:08:15.856 "dma_device_type": 1 00:08:15.856 } 00:08:15.856 ], 00:08:15.856 "name": "Nvme0n1", 00:08:15.856 "num_blocks": 38912, 00:08:15.856 "numa_id": -1, 00:08:15.856 "product_name": "NVMe disk", 00:08:15.856 "supported_io_types": { 00:08:15.856 "abort": true, 00:08:15.856 "compare": true, 00:08:15.856 "compare_and_write": true, 00:08:15.856 "copy": true, 00:08:15.856 "flush": true, 00:08:15.856 "get_zone_info": false, 00:08:15.856 "nvme_admin": true, 00:08:15.856 "nvme_io": true, 00:08:15.856 "nvme_io_md": false, 00:08:15.856 "nvme_iov_md": false, 00:08:15.856 "read": true, 00:08:15.856 "reset": true, 00:08:15.856 "seek_data": false, 00:08:15.856 "seek_hole": false, 00:08:15.856 "unmap": true, 00:08:15.856 
"write": true, 00:08:15.856 "write_zeroes": true, 00:08:15.857 "zcopy": false, 00:08:15.857 "zone_append": false, 00:08:15.857 "zone_management": false 00:08:15.857 }, 00:08:15.857 "uuid": "e5beb607-ede8-48a3-9b48-49843d1d6fcc", 00:08:15.857 "zoned": false 00:08:15.857 } 00:08:15.857 ] 00:08:15.857 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67184 00:08:15.857 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:15.857 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:16.114 Running I/O for 10 seconds... 00:08:17.046 Latency(us) 00:08:17.046 [2024-11-06T14:12:44.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.046 Nvme0n1 : 1.00 10520.00 41.09 0.00 0.00 0.00 0.00 0.00 00:08:17.046 [2024-11-06T14:12:44.681Z] =================================================================================================================== 00:08:17.046 [2024-11-06T14:12:44.681Z] Total : 10520.00 41.09 0.00 0.00 0.00 0.00 0.00 00:08:17.046 00:08:17.978 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:17.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.978 Nvme0n1 : 2.00 10123.00 39.54 0.00 0.00 0.00 0.00 0.00 00:08:17.978 [2024-11-06T14:12:45.613Z] =================================================================================================================== 00:08:17.978 [2024-11-06T14:12:45.613Z] Total : 10123.00 39.54 0.00 0.00 0.00 0.00 0.00 00:08:17.978 00:08:18.236 true 00:08:18.236 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:18.236 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:18.236 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:18.236 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:18.236 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 67184 00:08:19.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.169 Nvme0n1 : 3.00 10475.67 40.92 0.00 0.00 0.00 0.00 0.00 00:08:19.169 [2024-11-06T14:12:46.805Z] =================================================================================================================== 00:08:19.170 [2024-11-06T14:12:46.805Z] Total : 10475.67 40.92 0.00 0.00 0.00 0.00 0.00 00:08:19.170 00:08:20.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.105 Nvme0n1 : 4.00 10602.25 41.42 0.00 0.00 0.00 0.00 0.00 00:08:20.105 [2024-11-06T14:12:47.740Z] =================================================================================================================== 00:08:20.105 [2024-11-06T14:12:47.740Z] Total : 10602.25 41.42 0.00 
0.00 0.00 0.00 0.00 00:08:20.105 00:08:21.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.038 Nvme0n1 : 5.00 10670.60 41.68 0.00 0.00 0.00 0.00 0.00 00:08:21.038 [2024-11-06T14:12:48.673Z] =================================================================================================================== 00:08:21.038 [2024-11-06T14:12:48.673Z] Total : 10670.60 41.68 0.00 0.00 0.00 0.00 0.00 00:08:21.038 00:08:21.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.971 Nvme0n1 : 6.00 10712.83 41.85 0.00 0.00 0.00 0.00 0.00 00:08:21.971 [2024-11-06T14:12:49.606Z] =================================================================================================================== 00:08:21.971 [2024-11-06T14:12:49.606Z] Total : 10712.83 41.85 0.00 0.00 0.00 0.00 0.00 00:08:21.971 00:08:22.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.903 Nvme0n1 : 7.00 10733.86 41.93 0.00 0.00 0.00 0.00 0.00 00:08:22.903 [2024-11-06T14:12:50.538Z] =================================================================================================================== 00:08:22.903 [2024-11-06T14:12:50.538Z] Total : 10733.86 41.93 0.00 0.00 0.00 0.00 0.00 00:08:22.903 00:08:24.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.273 Nvme0n1 : 8.00 10764.38 42.05 0.00 0.00 0.00 0.00 0.00 00:08:24.273 [2024-11-06T14:12:51.908Z] =================================================================================================================== 00:08:24.273 [2024-11-06T14:12:51.908Z] Total : 10764.38 42.05 0.00 0.00 0.00 0.00 0.00 00:08:24.273 00:08:25.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.217 Nvme0n1 : 9.00 10789.11 42.14 0.00 0.00 0.00 0.00 0.00 00:08:25.217 [2024-11-06T14:12:52.852Z] =================================================================================================================== 00:08:25.217 [2024-11-06T14:12:52.852Z] Total : 10789.11 42.14 0.00 0.00 0.00 0.00 0.00 00:08:25.217 00:08:26.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.150 Nvme0n1 : 10.00 10789.80 42.15 0.00 0.00 0.00 0.00 0.00 00:08:26.150 [2024-11-06T14:12:53.785Z] =================================================================================================================== 00:08:26.150 [2024-11-06T14:12:53.785Z] Total : 10789.80 42.15 0.00 0.00 0.00 0.00 0.00 00:08:26.150 00:08:26.150 00:08:26.150 Latency(us) 00:08:26.150 [2024-11-06T14:12:53.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.150 Nvme0n1 : 10.00 10799.29 42.18 0.00 0.00 11848.39 3806.13 116956.55 00:08:26.150 [2024-11-06T14:12:53.785Z] =================================================================================================================== 00:08:26.150 [2024-11-06T14:12:53.785Z] Total : 10799.29 42.18 0.00 0.00 11848.39 3806.13 116956.55 00:08:26.150 { 00:08:26.150 "results": [ 00:08:26.150 { 00:08:26.150 "job": "Nvme0n1", 00:08:26.150 "core_mask": "0x2", 00:08:26.150 "workload": "randwrite", 00:08:26.150 "status": "finished", 00:08:26.150 "queue_depth": 128, 00:08:26.150 "io_size": 4096, 00:08:26.150 "runtime": 10.003066, 00:08:26.150 "iops": 10799.288938011605, 00:08:26.150 "mibps": 42.18472241410783, 00:08:26.150 "io_failed": 0, 00:08:26.150 "io_timeout": 0, 00:08:26.150 
"avg_latency_us": 11848.387862864924, 00:08:26.150 "min_latency_us": 3806.1292307692306, 00:08:26.150 "max_latency_us": 116956.55384615385 00:08:26.150 } 00:08:26.150 ], 00:08:26.150 "core_count": 1 00:08:26.150 } 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67137 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 67137 ']' 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 67137 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67137 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:26.150 killing process with pid 67137 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67137' 00:08:26.150 Received shutdown signal, test time was about 10.000000 seconds 00:08:26.150 00:08:26.150 Latency(us) 00:08:26.150 [2024-11-06T14:12:53.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.150 [2024-11-06T14:12:53.785Z] =================================================================================================================== 00:08:26.150 [2024-11-06T14:12:53.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 67137 00:08:26.150 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 67137 00:08:26.716 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:26.716 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.974 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:26.974 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:27.232 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:27.232 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:27.232 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:27.232 [2024-11-06 14:12:54.834758] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being 
removed: closing lvstore lvs 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:27.490 14:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:27.490 2024/11/06 14:12:55 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c7fedde9-8de6-4b84-8b51-5e627fcc3696], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:27.490 request: 00:08:27.490 { 00:08:27.490 "method": "bdev_lvol_get_lvstores", 00:08:27.490 "params": { 00:08:27.490 "uuid": "c7fedde9-8de6-4b84-8b51-5e627fcc3696" 00:08:27.490 } 00:08:27.490 } 00:08:27.490 Got JSON-RPC error response 00:08:27.490 GoRPCClient: error on JSON-RPC call 00:08:27.490 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:27.490 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.490 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.490 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.490 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.748 aio_bdev 00:08:27.748 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e5beb607-ede8-48a3-9b48-49843d1d6fcc 
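The NOT wrapper at nvmf_lvs_grow.sh@85 above is the harness's negative-assertion idiom: it inverts the wrapped command's exit status, so the step passes only because bdev_lvol_get_lvstores fails (Code=-19, ENODEV) once its base bdev has been deleted out from under it. Roughly, as a simplified sketch rather than the exact autotest_common.sh definition:

  NOT() {
      if "$@"; then
          return 1    # wrapped command unexpectedly succeeded
      fi
      return 0        # wrapped command failed, as required
  }
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696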
00:08:27.748 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=e5beb607-ede8-48a3-9b48-49843d1d6fcc 00:08:27.748 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:27.748 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:27.748 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:27.748 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:27.748 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:28.018 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5beb607-ede8-48a3-9b48-49843d1d6fcc -t 2000 00:08:28.275 [ 00:08:28.275 { 00:08:28.275 "aliases": [ 00:08:28.275 "lvs/lvol" 00:08:28.275 ], 00:08:28.275 "assigned_rate_limits": { 00:08:28.275 "r_mbytes_per_sec": 0, 00:08:28.275 "rw_ios_per_sec": 0, 00:08:28.275 "rw_mbytes_per_sec": 0, 00:08:28.275 "w_mbytes_per_sec": 0 00:08:28.275 }, 00:08:28.275 "block_size": 4096, 00:08:28.275 "claimed": false, 00:08:28.275 "driver_specific": { 00:08:28.275 "lvol": { 00:08:28.275 "base_bdev": "aio_bdev", 00:08:28.275 "clone": false, 00:08:28.275 "esnap_clone": false, 00:08:28.275 "lvol_store_uuid": "c7fedde9-8de6-4b84-8b51-5e627fcc3696", 00:08:28.275 "num_allocated_clusters": 38, 00:08:28.275 "snapshot": false, 00:08:28.275 "thin_provision": false 00:08:28.275 } 00:08:28.275 }, 00:08:28.275 "name": "e5beb607-ede8-48a3-9b48-49843d1d6fcc", 00:08:28.275 "num_blocks": 38912, 00:08:28.275 "product_name": "Logical Volume", 00:08:28.275 "supported_io_types": { 00:08:28.275 "abort": false, 00:08:28.275 "compare": false, 00:08:28.275 "compare_and_write": false, 00:08:28.275 "copy": false, 00:08:28.275 "flush": false, 00:08:28.275 "get_zone_info": false, 00:08:28.275 "nvme_admin": false, 00:08:28.275 "nvme_io": false, 00:08:28.275 "nvme_io_md": false, 00:08:28.275 "nvme_iov_md": false, 00:08:28.275 "read": true, 00:08:28.275 "reset": true, 00:08:28.275 "seek_data": true, 00:08:28.275 "seek_hole": true, 00:08:28.275 "unmap": true, 00:08:28.275 "write": true, 00:08:28.275 "write_zeroes": true, 00:08:28.275 "zcopy": false, 00:08:28.275 "zone_append": false, 00:08:28.275 "zone_management": false 00:08:28.275 }, 00:08:28.275 "uuid": "e5beb607-ede8-48a3-9b48-49843d1d6fcc", 00:08:28.275 "zoned": false 00:08:28.275 } 00:08:28.275 ] 00:08:28.275 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:28.275 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:28.275 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:28.275 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:28.275 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:28.275 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:28.531 14:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:28.531 14:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e5beb607-ede8-48a3-9b48-49843d1d6fcc 00:08:28.788 14:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7fedde9-8de6-4b84-8b51-5e627fcc3696 00:08:29.046 14:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.304 14:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.868 00:08:29.868 real 0m17.295s 00:08:29.868 user 0m16.668s 00:08:29.868 sys 0m1.694s 00:08:29.868 ************************************ 00:08:29.868 END TEST lvs_grow_clean 00:08:29.868 ************************************ 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.868 ************************************ 00:08:29.868 START TEST lvs_grow_dirty 00:08:29.868 ************************************ 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.868 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.125 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:30.125 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:30.125 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:30.125 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:30.125 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:30.381 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:30.381 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:30.381 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 lvol 150 00:08:30.639 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e566a0e1-8925-4ec0-8c0d-fa6183815a28 00:08:30.639 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.639 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.896 [2024-11-06 14:12:58.316601] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:30.896 [2024-11-06 14:12:58.316671] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.896 true 00:08:30.896 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:30.896 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:31.154 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:31.154 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.154 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e566a0e1-8925-4ec0-8c0d-fa6183815a28 00:08:31.411 14:12:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:31.668 [2024-11-06 14:12:59.145057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:31.668 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67581 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67581 /var/tmp/bdevperf.sock 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 67581 ']' 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:31.926 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.926 [2024-11-06 14:12:59.444630] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
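As in the clean case above, the I/O load comes from bdevperf started with -z (start idle and wait for RPC) on its own socket, so the controller can be attached and the run kicked off out of band. The flags are taken from the trace; the three commands below are the condensed pattern:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests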
00:08:31.926 [2024-11-06 14:12:59.444785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67581 ] 00:08:32.182 [2024-11-06 14:12:59.611204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.182 [2024-11-06 14:12:59.709477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.758 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.758 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:32.758 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:33.016 Nvme0n1 00:08:33.016 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:33.275 [ 00:08:33.275 { 00:08:33.275 "aliases": [ 00:08:33.275 "e566a0e1-8925-4ec0-8c0d-fa6183815a28" 00:08:33.275 ], 00:08:33.275 "assigned_rate_limits": { 00:08:33.275 "r_mbytes_per_sec": 0, 00:08:33.275 "rw_ios_per_sec": 0, 00:08:33.275 "rw_mbytes_per_sec": 0, 00:08:33.275 "w_mbytes_per_sec": 0 00:08:33.275 }, 00:08:33.275 "block_size": 4096, 00:08:33.275 "claimed": false, 00:08:33.275 "driver_specific": { 00:08:33.275 "mp_policy": "active_passive", 00:08:33.275 "nvme": [ 00:08:33.275 { 00:08:33.275 "ctrlr_data": { 00:08:33.275 "ana_reporting": false, 00:08:33.275 "cntlid": 1, 00:08:33.275 "firmware_revision": "25.01", 00:08:33.275 "model_number": "SPDK bdev Controller", 00:08:33.275 "multi_ctrlr": true, 00:08:33.275 "oacs": { 00:08:33.275 "firmware": 0, 00:08:33.275 "format": 0, 00:08:33.275 "ns_manage": 0, 00:08:33.275 "security": 0 00:08:33.275 }, 00:08:33.275 "serial_number": "SPDK0", 00:08:33.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.275 "vendor_id": "0x8086" 00:08:33.275 }, 00:08:33.275 "ns_data": { 00:08:33.275 "can_share": true, 00:08:33.275 "id": 1 00:08:33.275 }, 00:08:33.275 "trid": { 00:08:33.275 "adrfam": "IPv4", 00:08:33.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.275 "traddr": "10.0.0.3", 00:08:33.275 "trsvcid": "4420", 00:08:33.275 "trtype": "TCP" 00:08:33.275 }, 00:08:33.275 "vs": { 00:08:33.275 "nvme_version": "1.3" 00:08:33.275 } 00:08:33.275 } 00:08:33.275 ] 00:08:33.275 }, 00:08:33.275 "memory_domains": [ 00:08:33.275 { 00:08:33.275 "dma_device_id": "system", 00:08:33.275 "dma_device_type": 1 00:08:33.275 } 00:08:33.275 ], 00:08:33.275 "name": "Nvme0n1", 00:08:33.275 "num_blocks": 38912, 00:08:33.275 "numa_id": -1, 00:08:33.275 "product_name": "NVMe disk", 00:08:33.275 "supported_io_types": { 00:08:33.275 "abort": true, 00:08:33.275 "compare": true, 00:08:33.275 "compare_and_write": true, 00:08:33.275 "copy": true, 00:08:33.275 "flush": true, 00:08:33.275 "get_zone_info": false, 00:08:33.275 "nvme_admin": true, 00:08:33.275 "nvme_io": true, 00:08:33.275 "nvme_io_md": false, 00:08:33.275 "nvme_iov_md": false, 00:08:33.275 "read": true, 00:08:33.275 "reset": true, 00:08:33.275 "seek_data": false, 00:08:33.275 "seek_hole": false, 00:08:33.275 "unmap": true, 00:08:33.275 
"write": true, 00:08:33.275 "write_zeroes": true, 00:08:33.275 "zcopy": false, 00:08:33.275 "zone_append": false, 00:08:33.275 "zone_management": false 00:08:33.275 }, 00:08:33.275 "uuid": "e566a0e1-8925-4ec0-8c0d-fa6183815a28", 00:08:33.275 "zoned": false 00:08:33.275 } 00:08:33.275 ] 00:08:33.275 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67621 00:08:33.275 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:33.275 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.275 Running I/O for 10 seconds... 00:08:34.648 Latency(us) 00:08:34.648 [2024-11-06T14:13:02.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.648 Nvme0n1 : 1.00 11096.00 43.34 0.00 0.00 0.00 0.00 0.00 00:08:34.648 [2024-11-06T14:13:02.283Z] =================================================================================================================== 00:08:34.648 [2024-11-06T14:13:02.283Z] Total : 11096.00 43.34 0.00 0.00 0.00 0.00 0.00 00:08:34.648 00:08:35.212 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:35.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.469 Nvme0n1 : 2.00 11519.00 45.00 0.00 0.00 0.00 0.00 0.00 00:08:35.469 [2024-11-06T14:13:03.104Z] =================================================================================================================== 00:08:35.469 [2024-11-06T14:13:03.104Z] Total : 11519.00 45.00 0.00 0.00 0.00 0.00 0.00 00:08:35.469 00:08:35.469 true 00:08:35.469 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:35.469 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:35.726 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.726 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.726 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67621 00:08:36.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.321 Nvme0n1 : 3.00 11567.00 45.18 0.00 0.00 0.00 0.00 0.00 00:08:36.321 [2024-11-06T14:13:03.956Z] =================================================================================================================== 00:08:36.321 [2024-11-06T14:13:03.956Z] Total : 11567.00 45.18 0.00 0.00 0.00 0.00 0.00 00:08:36.321 00:08:37.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.254 Nvme0n1 : 4.00 11600.25 45.31 0.00 0.00 0.00 0.00 0.00 00:08:37.254 [2024-11-06T14:13:04.889Z] =================================================================================================================== 00:08:37.254 [2024-11-06T14:13:04.889Z] Total : 11600.25 45.31 0.00 
0.00 0.00 0.00 0.00 00:08:37.254 00:08:38.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.627 Nvme0n1 : 5.00 11538.80 45.07 0.00 0.00 0.00 0.00 0.00 00:08:38.627 [2024-11-06T14:13:06.262Z] =================================================================================================================== 00:08:38.627 [2024-11-06T14:13:06.262Z] Total : 11538.80 45.07 0.00 0.00 0.00 0.00 0.00 00:08:38.627 00:08:39.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.557 Nvme0n1 : 6.00 10113.50 39.51 0.00 0.00 0.00 0.00 0.00 00:08:39.557 [2024-11-06T14:13:07.192Z] =================================================================================================================== 00:08:39.557 [2024-11-06T14:13:07.192Z] Total : 10113.50 39.51 0.00 0.00 0.00 0.00 0.00 00:08:39.557 00:08:40.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.517 Nvme0n1 : 7.00 10213.86 39.90 0.00 0.00 0.00 0.00 0.00 00:08:40.517 [2024-11-06T14:13:08.152Z] =================================================================================================================== 00:08:40.517 [2024-11-06T14:13:08.152Z] Total : 10213.86 39.90 0.00 0.00 0.00 0.00 0.00 00:08:40.517 00:08:41.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.453 Nvme0n1 : 8.00 10349.62 40.43 0.00 0.00 0.00 0.00 0.00 00:08:41.453 [2024-11-06T14:13:09.088Z] =================================================================================================================== 00:08:41.453 [2024-11-06T14:13:09.088Z] Total : 10349.62 40.43 0.00 0.00 0.00 0.00 0.00 00:08:41.453 00:08:42.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.385 Nvme0n1 : 9.00 10301.67 40.24 0.00 0.00 0.00 0.00 0.00 00:08:42.385 [2024-11-06T14:13:10.020Z] =================================================================================================================== 00:08:42.385 [2024-11-06T14:13:10.020Z] Total : 10301.67 40.24 0.00 0.00 0.00 0.00 0.00 00:08:42.385 00:08:43.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.316 Nvme0n1 : 10.00 10338.90 40.39 0.00 0.00 0.00 0.00 0.00 00:08:43.316 [2024-11-06T14:13:10.951Z] =================================================================================================================== 00:08:43.316 [2024-11-06T14:13:10.951Z] Total : 10338.90 40.39 0.00 0.00 0.00 0.00 0.00 00:08:43.316 00:08:43.316 00:08:43.316 Latency(us) 00:08:43.316 [2024-11-06T14:13:10.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.316 Nvme0n1 : 10.01 10340.22 40.39 0.00 0.00 12374.47 3755.72 767880.27 00:08:43.316 [2024-11-06T14:13:10.951Z] =================================================================================================================== 00:08:43.316 [2024-11-06T14:13:10.951Z] Total : 10340.22 40.39 0.00 0.00 12374.47 3755.72 767880.27 00:08:43.316 { 00:08:43.316 "results": [ 00:08:43.316 { 00:08:43.316 "job": "Nvme0n1", 00:08:43.316 "core_mask": "0x2", 00:08:43.316 "workload": "randwrite", 00:08:43.316 "status": "finished", 00:08:43.316 "queue_depth": 128, 00:08:43.316 "io_size": 4096, 00:08:43.316 "runtime": 10.011105, 00:08:43.316 "iops": 10340.217188811825, 00:08:43.316 "mibps": 40.39147339379619, 00:08:43.316 "io_failed": 0, 00:08:43.316 "io_timeout": 0, 00:08:43.316 
"avg_latency_us": 12374.46786785671, 00:08:43.316 "min_latency_us": 3755.716923076923, 00:08:43.316 "max_latency_us": 767880.2707692308 00:08:43.316 } 00:08:43.316 ], 00:08:43.316 "core_count": 1 00:08:43.316 } 00:08:43.316 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67581 00:08:43.316 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 67581 ']' 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 67581 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67581 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:43.317 killing process with pid 67581 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67581' 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 67581 00:08:43.317 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.317 00:08:43.317 Latency(us) 00:08:43.317 [2024-11-06T14:13:10.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.317 [2024-11-06T14:13:10.952Z] =================================================================================================================== 00:08:43.317 [2024-11-06T14:13:10.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.317 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 67581 00:08:43.881 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:44.137 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:44.394 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:44.394 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66981 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66981 00:08:44.652 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66981 Killed "${NVMF_APP[@]}" "$@" 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67791 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67791 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 67791 ']' 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.652 14:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.652 [2024-11-06 14:13:12.251752] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:44.652 [2024-11-06 14:13:12.251886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.910 [2024-11-06 14:13:12.420050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.910 [2024-11-06 14:13:12.538795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.910 [2024-11-06 14:13:12.538864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.910 [2024-11-06 14:13:12.538878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.910 [2024-11-06 14:13:12.538896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.910 [2024-11-06 14:13:12.538905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
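A minimal sketch of the start-and-wait pattern traced above (not captured output; it assumes rpc.py's -s socket and -t timeout options and the spdk_get_version RPC method):

    # Launch nvmf_tgt inside the test namespace, mirroring the command above.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers; kill -0 catches early exits.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; break; }
        sleep 0.5
    done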
00:08:44.910 [2024-11-06 14:13:12.539832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.479 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.479 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:45.479 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.479 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.479 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.479 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.479 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.736 [2024-11-06 14:13:13.261932] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.736 [2024-11-06 14:13:13.262390] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.736 [2024-11-06 14:13:13.264984] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e566a0e1-8925-4ec0-8c0d-fa6183815a28 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e566a0e1-8925-4ec0-8c0d-fa6183815a28 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:45.736 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.993 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e566a0e1-8925-4ec0-8c0d-fa6183815a28 -t 2000 00:08:46.251 [ 00:08:46.251 { 00:08:46.251 "aliases": [ 00:08:46.251 "lvs/lvol" 00:08:46.251 ], 00:08:46.251 "assigned_rate_limits": { 00:08:46.251 "r_mbytes_per_sec": 0, 00:08:46.251 "rw_ios_per_sec": 0, 00:08:46.251 "rw_mbytes_per_sec": 0, 00:08:46.251 "w_mbytes_per_sec": 0 00:08:46.251 }, 00:08:46.251 "block_size": 4096, 00:08:46.251 "claimed": false, 00:08:46.251 "driver_specific": { 00:08:46.251 "lvol": { 00:08:46.251 "base_bdev": "aio_bdev", 00:08:46.251 "clone": false, 00:08:46.251 "esnap_clone": false, 00:08:46.251 "lvol_store_uuid": "2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3", 00:08:46.251 "num_allocated_clusters": 38, 00:08:46.251 "snapshot": false, 00:08:46.251 
"thin_provision": false 00:08:46.251 } 00:08:46.251 }, 00:08:46.251 "name": "e566a0e1-8925-4ec0-8c0d-fa6183815a28", 00:08:46.251 "num_blocks": 38912, 00:08:46.251 "product_name": "Logical Volume", 00:08:46.251 "supported_io_types": { 00:08:46.251 "abort": false, 00:08:46.251 "compare": false, 00:08:46.251 "compare_and_write": false, 00:08:46.251 "copy": false, 00:08:46.251 "flush": false, 00:08:46.251 "get_zone_info": false, 00:08:46.251 "nvme_admin": false, 00:08:46.251 "nvme_io": false, 00:08:46.251 "nvme_io_md": false, 00:08:46.251 "nvme_iov_md": false, 00:08:46.251 "read": true, 00:08:46.251 "reset": true, 00:08:46.251 "seek_data": true, 00:08:46.251 "seek_hole": true, 00:08:46.251 "unmap": true, 00:08:46.252 "write": true, 00:08:46.252 "write_zeroes": true, 00:08:46.252 "zcopy": false, 00:08:46.252 "zone_append": false, 00:08:46.252 "zone_management": false 00:08:46.252 }, 00:08:46.252 "uuid": "e566a0e1-8925-4ec0-8c0d-fa6183815a28", 00:08:46.252 "zoned": false 00:08:46.252 } 00:08:46.252 ] 00:08:46.252 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:46.252 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:46.252 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:46.509 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:46.509 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:46.509 14:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:46.509 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:46.509 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.766 [2024-11-06 14:13:14.315250] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.024 14:13:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:47.024 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:47.281 2024/11/06 14:13:14 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:47.281 request: 00:08:47.281 { 00:08:47.281 "method": "bdev_lvol_get_lvstores", 00:08:47.281 "params": { 00:08:47.281 "uuid": "2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3" 00:08:47.281 } 00:08:47.281 } 00:08:47.281 Got JSON-RPC error response 00:08:47.281 GoRPCClient: error on JSON-RPC call 00:08:47.281 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:47.281 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:47.281 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:47.281 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:47.281 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.538 aio_bdev 00:08:47.538 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e566a0e1-8925-4ec0-8c0d-fa6183815a28 00:08:47.538 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e566a0e1-8925-4ec0-8c0d-fa6183815a28 00:08:47.538 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:47.538 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:47.538 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:47.538 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:47.538 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.796 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e566a0e1-8925-4ec0-8c0d-fa6183815a28 -t 2000 00:08:47.796 [ 
00:08:47.796 { 00:08:47.796 "aliases": [ 00:08:47.796 "lvs/lvol" 00:08:47.796 ], 00:08:47.796 "assigned_rate_limits": { 00:08:47.796 "r_mbytes_per_sec": 0, 00:08:47.796 "rw_ios_per_sec": 0, 00:08:47.796 "rw_mbytes_per_sec": 0, 00:08:47.796 "w_mbytes_per_sec": 0 00:08:47.796 }, 00:08:47.796 "block_size": 4096, 00:08:47.796 "claimed": false, 00:08:47.796 "driver_specific": { 00:08:47.796 "lvol": { 00:08:47.796 "base_bdev": "aio_bdev", 00:08:47.796 "clone": false, 00:08:47.796 "esnap_clone": false, 00:08:47.796 "lvol_store_uuid": "2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3", 00:08:47.796 "num_allocated_clusters": 38, 00:08:47.796 "snapshot": false, 00:08:47.796 "thin_provision": false 00:08:47.796 } 00:08:47.796 }, 00:08:47.796 "name": "e566a0e1-8925-4ec0-8c0d-fa6183815a28", 00:08:47.796 "num_blocks": 38912, 00:08:47.796 "product_name": "Logical Volume", 00:08:47.796 "supported_io_types": { 00:08:47.796 "abort": false, 00:08:47.796 "compare": false, 00:08:47.796 "compare_and_write": false, 00:08:47.796 "copy": false, 00:08:47.796 "flush": false, 00:08:47.796 "get_zone_info": false, 00:08:47.796 "nvme_admin": false, 00:08:47.796 "nvme_io": false, 00:08:47.796 "nvme_io_md": false, 00:08:47.796 "nvme_iov_md": false, 00:08:47.796 "read": true, 00:08:47.796 "reset": true, 00:08:47.796 "seek_data": true, 00:08:47.796 "seek_hole": true, 00:08:47.796 "unmap": true, 00:08:47.796 "write": true, 00:08:47.796 "write_zeroes": true, 00:08:47.796 "zcopy": false, 00:08:47.796 "zone_append": false, 00:08:47.796 "zone_management": false 00:08:47.796 }, 00:08:47.797 "uuid": "e566a0e1-8925-4ec0-8c0d-fa6183815a28", 00:08:47.797 "zoned": false 00:08:47.797 } 00:08:47.797 ] 00:08:47.797 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:47.797 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:47.797 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:48.054 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:48.054 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:48.054 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.311 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.311 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e566a0e1-8925-4ec0-8c0d-fa6183815a28 00:08:48.569 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2858bdc6-dc8e-4af3-bf7b-d363ee6a68b3 00:08:48.826 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.827 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.391 00:08:49.391 real 0m19.524s 00:08:49.391 user 0m41.542s 00:08:49.391 sys 0m5.506s 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.391 ************************************ 00:08:49.391 END TEST lvs_grow_dirty 00:08:49.391 ************************************ 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.391 nvmf_trace.0 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.391 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.959 rmmod nvme_tcp 00:08:49.959 rmmod nvme_fabrics 00:08:49.959 rmmod nvme_keyring 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67791 ']' 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67791 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 67791 ']' 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 67791 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:49.959 14:13:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67791 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:49.959 killing process with pid 67791 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67791' 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 67791 00:08:49.959 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 67791 00:08:50.524 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.524 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.524 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.524 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:50.524 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:50.524 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.524 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:50.782 00:08:50.782 real 0m40.088s 00:08:50.782 user 1m4.577s 00:08:50.782 sys 0m8.348s 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.782 ************************************ 00:08:50.782 END TEST nvmf_lvs_grow 00:08:50.782 ************************************ 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.782 14:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.040 ************************************ 00:08:51.040 START TEST nvmf_bdev_io_wait 00:08:51.040 ************************************ 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:51.040 * Looking for test storage... 
00:08:51.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:51.040 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.041 --rc genhtml_branch_coverage=1 00:08:51.041 --rc genhtml_function_coverage=1 00:08:51.041 --rc genhtml_legend=1 00:08:51.041 --rc geninfo_all_blocks=1 00:08:51.041 --rc geninfo_unexecuted_blocks=1 00:08:51.041 00:08:51.041 ' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.041 --rc genhtml_branch_coverage=1 00:08:51.041 --rc genhtml_function_coverage=1 00:08:51.041 --rc genhtml_legend=1 00:08:51.041 --rc geninfo_all_blocks=1 00:08:51.041 --rc geninfo_unexecuted_blocks=1 00:08:51.041 00:08:51.041 ' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.041 --rc genhtml_branch_coverage=1 00:08:51.041 --rc genhtml_function_coverage=1 00:08:51.041 --rc genhtml_legend=1 00:08:51.041 --rc geninfo_all_blocks=1 00:08:51.041 --rc geninfo_unexecuted_blocks=1 00:08:51.041 00:08:51.041 ' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.041 --rc genhtml_branch_coverage=1 00:08:51.041 --rc genhtml_function_coverage=1 00:08:51.041 --rc genhtml_legend=1 00:08:51.041 --rc geninfo_all_blocks=1 00:08:51.041 --rc geninfo_unexecuted_blocks=1 00:08:51.041 00:08:51.041 ' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
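The MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set above feed the target bring-up this test performs; a condensed sketch of that sequence, taken from the rpc_cmd calls that appear later in this run:

    # Create the backing malloc bdev (64 MB, 512-byte blocks), the TCP transport,
    # and a subsystem exposing the bdev as a namespace on 10.0.0.3:4420.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420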
00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:51.041 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.042 
14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:51.042 Cannot find device "nvmf_init_br" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:51.042 Cannot find device "nvmf_init_br2" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:51.042 Cannot find device "nvmf_tgt_br" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.042 Cannot find device "nvmf_tgt_br2" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:51.042 Cannot find device "nvmf_init_br" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:51.042 Cannot find device "nvmf_init_br2" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:51.042 Cannot find device "nvmf_tgt_br" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:51.042 Cannot find device "nvmf_tgt_br2" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:51.042 Cannot find device "nvmf_br" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:51.042 Cannot find device "nvmf_init_if" 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:51.042 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:51.300 Cannot find device "nvmf_init_if2" 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:51.300 
14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.300 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:51.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:51.301 00:08:51.301 --- 10.0.0.3 ping statistics --- 00:08:51.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.301 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:51.301 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:51.301 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms 00:08:51.301 00:08:51.301 --- 10.0.0.4 ping statistics --- 00:08:51.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.301 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:08:51.301 00:08:51.301 --- 10.0.0.1 ping statistics --- 00:08:51.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.301 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:51.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:51.301 00:08:51.301 --- 10.0.0.2 ping statistics --- 00:08:51.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.301 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=68266 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 68266 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 68266 ']' 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.301 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.558 [2024-11-06 14:13:18.968691] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:51.558 [2024-11-06 14:13:18.968801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.558 [2024-11-06 14:13:19.125317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.816 [2024-11-06 14:13:19.245740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.816 [2024-11-06 14:13:19.245794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.816 [2024-11-06 14:13:19.245807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.816 [2024-11-06 14:13:19.245815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.816 [2024-11-06 14:13:19.245825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.816 [2024-11-06 14:13:19.247419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.816 [2024-11-06 14:13:19.247989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.816 [2024-11-06 14:13:19.248047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.816 [2024-11-06 14:13:19.248071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.384 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.642 [2024-11-06 14:13:20.033454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.642 Malloc0 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.642 [2024-11-06 14:13:20.120415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=68320 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=68322 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf 
-m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=68324 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.642 { 00:08:52.642 "params": { 00:08:52.642 "name": "Nvme$subsystem", 00:08:52.642 "trtype": "$TEST_TRANSPORT", 00:08:52.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.642 "adrfam": "ipv4", 00:08:52.642 "trsvcid": "$NVMF_PORT", 00:08:52.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.642 "hdgst": ${hdgst:-false}, 00:08:52.642 "ddgst": ${ddgst:-false} 00:08:52.642 }, 00:08:52.642 "method": "bdev_nvme_attach_controller" 00:08:52.642 } 00:08:52.642 EOF 00:08:52.642 )") 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.642 { 00:08:52.642 "params": { 00:08:52.642 "name": "Nvme$subsystem", 00:08:52.642 "trtype": "$TEST_TRANSPORT", 00:08:52.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.642 "adrfam": "ipv4", 00:08:52.642 "trsvcid": "$NVMF_PORT", 00:08:52.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.642 "hdgst": ${hdgst:-false}, 00:08:52.642 "ddgst": ${ddgst:-false} 00:08:52.642 }, 00:08:52.642 "method": "bdev_nvme_attach_controller" 00:08:52.642 } 00:08:52.642 EOF 00:08:52.642 )") 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=68326 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.642 { 00:08:52.642 "params": { 00:08:52.642 "name": "Nvme$subsystem", 00:08:52.642 "trtype": "$TEST_TRANSPORT", 00:08:52.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.642 "adrfam": "ipv4", 00:08:52.642 "trsvcid": "$NVMF_PORT", 00:08:52.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.642 "hdgst": ${hdgst:-false}, 00:08:52.642 "ddgst": ${ddgst:-false} 00:08:52.642 }, 00:08:52.642 "method": "bdev_nvme_attach_controller" 00:08:52.642 } 00:08:52.642 EOF 00:08:52.642 )") 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.642 "params": { 00:08:52.642 "name": "Nvme1", 00:08:52.642 "trtype": "tcp", 00:08:52.642 "traddr": "10.0.0.3", 00:08:52.642 "adrfam": "ipv4", 00:08:52.642 "trsvcid": "4420", 00:08:52.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.642 "hdgst": false, 00:08:52.642 "ddgst": false 00:08:52.642 }, 00:08:52.642 "method": "bdev_nvme_attach_controller" 00:08:52.642 }' 00:08:52.642 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.642 "params": { 00:08:52.642 "name": "Nvme1", 00:08:52.642 "trtype": "tcp", 00:08:52.643 "traddr": "10.0.0.3", 00:08:52.643 "adrfam": "ipv4", 00:08:52.643 "trsvcid": "4420", 00:08:52.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.643 "hdgst": false, 00:08:52.643 "ddgst": false 00:08:52.643 }, 00:08:52.643 "method": "bdev_nvme_attach_controller" 00:08:52.643 }' 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.643 { 00:08:52.643 "params": { 00:08:52.643 "name": "Nvme$subsystem", 00:08:52.643 "trtype": "$TEST_TRANSPORT", 00:08:52.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.643 "adrfam": "ipv4", 00:08:52.643 "trsvcid": "$NVMF_PORT", 00:08:52.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.643 "hdgst": ${hdgst:-false}, 00:08:52.643 "ddgst": ${ddgst:-false} 00:08:52.643 }, 00:08:52.643 "method": "bdev_nvme_attach_controller" 00:08:52.643 } 00:08:52.643 EOF 00:08:52.643 )") 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
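
The subsystem these bdevperf jobs target was provisioned a few entries earlier through rpc_cmd, which is a thin wrapper over scripts/rpc.py. Issued directly, the same sequence looks like the sketch below, assuming the target's default /var/tmp/spdk.sock. The tiny bdev I/O pool (-p 5 -c 1) is deliberate: it starves submissions so they hit the queued IO_WAIT retry path this test exercises.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1                      # 5-entry I/O pool, per-thread cache of 1
    $RPC framework_start_init                            # leave the --wait-for-rpc holding state
    $RPC nvmf_create_transport -t tcp -o -u 8192         # TCP transport, 8 KiB I/O unit
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM disk, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
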
00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.643 "params": { 00:08:52.643 "name": "Nvme1", 00:08:52.643 "trtype": "tcp", 00:08:52.643 "traddr": "10.0.0.3", 00:08:52.643 "adrfam": "ipv4", 00:08:52.643 "trsvcid": "4420", 00:08:52.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.643 "hdgst": false, 00:08:52.643 "ddgst": false 00:08:52.643 }, 00:08:52.643 "method": "bdev_nvme_attach_controller" 00:08:52.643 }' 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.643 "params": { 00:08:52.643 "name": "Nvme1", 00:08:52.643 "trtype": "tcp", 00:08:52.643 "traddr": "10.0.0.3", 00:08:52.643 "adrfam": "ipv4", 00:08:52.643 "trsvcid": "4420", 00:08:52.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.643 "hdgst": false, 00:08:52.643 "ddgst": false 00:08:52.643 }, 00:08:52.643 "method": "bdev_nvme_attach_controller" 00:08:52.643 }' 00:08:52.643 [2024-11-06 14:13:20.202031] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:52.643 [2024-11-06 14:13:20.202146] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:52.643 [2024-11-06 14:13:20.204997] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:52.643 [2024-11-06 14:13:20.205071] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:52.643 [2024-11-06 14:13:20.210800] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:52.643 [2024-11-06 14:13:20.210897] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:52.643 [2024-11-06 14:13:20.213471] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
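
Each bdevperf instance starting here reads a one-shot JSON config on /dev/fd/63; the trace prints only the inner bdev_nvme_attach_controller object produced by gen_nvmf_target_json. Wrapped in the standard SPDK config envelope (the envelope shape is an assumption, the trace does not show the final jq output), the document handed to, say, the write job is roughly:

    cat > /tmp/nvmf_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # The write job from the trace, fed the file instead of a process substitution:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json /tmp/nvmf_bdev.json -q 128 -o 4096 -w write -t 1 -s 256
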
00:08:52.643 [2024-11-06 14:13:20.213754] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:52.643 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 68320 00:08:52.906 [2024-11-06 14:13:20.390473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.906 [2024-11-06 14:13:20.445470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.906 [2024-11-06 14:13:20.485604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:52.906 [2024-11-06 14:13:20.490659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.172 [2024-11-06 14:13:20.537665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.172 [2024-11-06 14:13:20.541876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:53.172 [2024-11-06 14:13:20.586733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:53.172 [2024-11-06 14:13:20.632196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:53.172 Running I/O for 1 seconds... 00:08:53.429 Running I/O for 1 seconds... 00:08:53.429 Running I/O for 1 seconds... 00:08:53.429 Running I/O for 1 seconds... 00:08:54.362 11828.00 IOPS, 46.20 MiB/s 00:08:54.362 Latency(us) 00:08:54.362 [2024-11-06T14:13:21.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.362 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:54.362 Nvme1n1 : 1.01 11898.14 46.48 0.00 0.00 10724.56 4133.81 18450.90 00:08:54.362 [2024-11-06T14:13:21.997Z] =================================================================================================================== 00:08:54.362 [2024-11-06T14:13:21.997Z] Total : 11898.14 46.48 0.00 0.00 10724.56 4133.81 18450.90 00:08:54.362 8780.00 IOPS, 34.30 MiB/s 00:08:54.362 Latency(us) 00:08:54.362 [2024-11-06T14:13:21.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.362 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:54.362 Nvme1n1 : 1.01 8838.99 34.53 0.00 0.00 14418.58 6553.60 24298.73 00:08:54.362 [2024-11-06T14:13:21.997Z] =================================================================================================================== 00:08:54.362 [2024-11-06T14:13:21.997Z] Total : 8838.99 34.53 0.00 0.00 14418.58 6553.60 24298.73 00:08:54.362 9221.00 IOPS, 36.02 MiB/s 00:08:54.362 Latency(us) 00:08:54.362 [2024-11-06T14:13:21.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.362 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:54.362 Nvme1n1 : 1.01 9304.87 36.35 0.00 0.00 13706.03 5671.38 25508.63 00:08:54.362 [2024-11-06T14:13:21.997Z] =================================================================================================================== 00:08:54.362 [2024-11-06T14:13:21.997Z] Total : 9304.87 36.35 0.00 0.00 13706.03 5671.38 25508.63 00:08:54.362 180632.00 IOPS, 705.59 MiB/s 00:08:54.362 Latency(us) 00:08:54.362 [2024-11-06T14:13:21.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.362 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:54.362 Nvme1n1 : 1.00 180281.60 704.23 0.00 0.00 706.29 321.38 1928.27 
00:08:54.362 [2024-11-06T14:13:21.997Z] =================================================================================================================== 00:08:54.362 [2024-11-06T14:13:21.997Z] Total : 180281.60 704.23 0.00 0.00 706.29 321.38 1928.27 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 68322 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 68324 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 68326 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.927 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.185 rmmod nvme_tcp 00:08:55.185 rmmod nvme_fabrics 00:08:55.185 rmmod nvme_keyring 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 68266 ']' 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 68266 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 68266 ']' 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 68266 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68266 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:55.185 killing process with pid 68266 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:55.185 14:13:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68266' 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 68266 00:08:55.185 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 68266 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.751 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.011 
14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:56.011 00:08:56.011 real 0m4.993s 00:08:56.011 user 0m21.881s 00:08:56.011 sys 0m1.964s 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.011 ************************************ 00:08:56.011 END TEST nvmf_bdev_io_wait 00:08:56.011 ************************************ 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.011 ************************************ 00:08:56.011 START TEST nvmf_queue_depth 00:08:56.011 ************************************ 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:56.011 * Looking for test storage... 00:08:56.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.011 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:56.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.011 --rc genhtml_branch_coverage=1 00:08:56.011 --rc genhtml_function_coverage=1 00:08:56.011 --rc genhtml_legend=1 00:08:56.011 --rc geninfo_all_blocks=1 00:08:56.011 --rc geninfo_unexecuted_blocks=1 00:08:56.012 00:08:56.012 ' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:56.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.012 --rc genhtml_branch_coverage=1 00:08:56.012 --rc genhtml_function_coverage=1 00:08:56.012 --rc genhtml_legend=1 00:08:56.012 --rc geninfo_all_blocks=1 00:08:56.012 --rc geninfo_unexecuted_blocks=1 00:08:56.012 00:08:56.012 ' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:56.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.012 --rc genhtml_branch_coverage=1 00:08:56.012 --rc genhtml_function_coverage=1 00:08:56.012 --rc genhtml_legend=1 00:08:56.012 --rc geninfo_all_blocks=1 00:08:56.012 --rc geninfo_unexecuted_blocks=1 00:08:56.012 00:08:56.012 ' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:56.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.012 --rc genhtml_branch_coverage=1 00:08:56.012 --rc genhtml_function_coverage=1 00:08:56.012 --rc genhtml_legend=1 00:08:56.012 --rc geninfo_all_blocks=1 00:08:56.012 --rc geninfo_unexecuted_blocks=1 00:08:56.012 00:08:56.012 ' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.012 14:13:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.012 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:56.012 
14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.012 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:56.012 14:13:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:56.013 Cannot find device "nvmf_init_br" 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:56.013 Cannot find device "nvmf_init_br2" 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:56.013 Cannot find device "nvmf_tgt_br" 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:56.013 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.271 Cannot find device "nvmf_tgt_br2" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:56.271 Cannot find device "nvmf_init_br" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:56.271 Cannot find device "nvmf_init_br2" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:56.271 Cannot find device "nvmf_tgt_br" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:56.271 Cannot find device "nvmf_tgt_br2" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:56.271 Cannot find device "nvmf_br" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:56.271 Cannot find device "nvmf_init_if" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:56.271 Cannot find device "nvmf_init_if2" 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.271 14:13:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.271 
14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:56.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:56.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:56.271 00:08:56.271 --- 10.0.0.3 ping statistics --- 00:08:56.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.271 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:56.271 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:56.271 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:08:56.271 00:08:56.271 --- 10.0.0.4 ping statistics --- 00:08:56.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.271 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:56.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:56.271 00:08:56.271 --- 10.0.0.1 ping statistics --- 00:08:56.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.271 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:56.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:56.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:08:56.271 00:08:56.271 --- 10.0.0.2 ping statistics --- 00:08:56.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.271 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:56.271 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68625 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68625 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 68625 ']' 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.272 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.529 [2024-11-06 14:13:23.974303] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
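
The waitforlisten 68625 call traced above blocks until this freshly started target answers on its RPC socket, giving up after max_retries attempts or if the process dies first. A minimal approximation of that helper (not the exact autotest_common.sh body; rpc_get_methods is just a cheap call any live SPDK app answers):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                    rpc_get_methods &>/dev/null; then
                return 0                              # socket is up and serving RPCs
            fi
            sleep 0.5
        done
        return 1
    }
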
00:08:56.529 [2024-11-06 14:13:23.974454] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.529 [2024-11-06 14:13:24.150061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.786 [2024-11-06 14:13:24.231574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.786 [2024-11-06 14:13:24.231621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.786 [2024-11-06 14:13:24.231631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.786 [2024-11-06 14:13:24.231643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.786 [2024-11-06 14:13:24.231651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.786 [2024-11-06 14:13:24.232369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 [2024-11-06 14:13:24.809485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 Malloc0 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
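The bring-up above reduces to three RPCs issued through rpc_cmd: create the TCP transport, back it with a RAM disk, and create the subsystem. A minimal manual equivalent, assuming an nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock (the rpc.py path is the one used elsewhere in this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Transport options exactly as logged: -u 8192 sets the in-capsule data size;
  # -o is the extra TCP-only option the harness appends via NVMF_TRANSPORT_OPTS.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks, named Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # -a: allow any host to connect; -s: serial number
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

The lines that follow attach Malloc0 as a namespace, add a listener on 10.0.0.3:4420, and then drive the namespace with bdevperf at -q 1024 (queue depth 1024, 4 KiB verify I/O for 10 seconds), which is the actual queue-depth exercise.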
00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 [2024-11-06 14:13:24.875398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68676 00:08:57.350 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68676 /var/tmp/bdevperf.sock 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 68676 ']' 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.351 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.351 [2024-11-06 14:13:24.945735] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:57.351 [2024-11-06 14:13:24.945854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68676 ] 00:08:57.607 [2024-11-06 14:13:25.096963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.607 [2024-11-06 14:13:25.195686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.540 14:13:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:58.540 14:13:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:58.540 14:13:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:58.540 14:13:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.540 14:13:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.540 NVMe0n1 00:08:58.540 14:13:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.540 14:13:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:58.540 Running I/O for 10 seconds... 00:09:00.404 9636.00 IOPS, 37.64 MiB/s [2024-11-06T14:13:29.410Z] 10145.50 IOPS, 39.63 MiB/s [2024-11-06T14:13:30.341Z] 10320.00 IOPS, 40.31 MiB/s [2024-11-06T14:13:31.272Z] 10508.00 IOPS, 41.05 MiB/s [2024-11-06T14:13:32.203Z] 10658.40 IOPS, 41.63 MiB/s [2024-11-06T14:13:33.134Z] 10757.00 IOPS, 42.02 MiB/s [2024-11-06T14:13:34.065Z] 10824.57 IOPS, 42.28 MiB/s [2024-11-06T14:13:34.997Z] 10858.38 IOPS, 42.42 MiB/s [2024-11-06T14:13:36.408Z] 10871.56 IOPS, 42.47 MiB/s [2024-11-06T14:13:36.408Z] 10868.10 IOPS, 42.45 MiB/s 00:09:08.773 Latency(us) 00:09:08.773 [2024-11-06T14:13:36.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.773 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:08.773 Verification LBA range: start 0x0 length 0x4000 00:09:08.773 NVMe0n1 : 10.06 10904.65 42.60 0.00 0.00 93555.07 15426.17 65334.35 00:09:08.773 [2024-11-06T14:13:36.408Z] =================================================================================================================== 00:09:08.773 [2024-11-06T14:13:36.408Z] Total : 10904.65 42.60 0.00 0.00 93555.07 15426.17 65334.35 00:09:08.773 { 00:09:08.773 "results": [ 00:09:08.773 { 00:09:08.773 "job": "NVMe0n1", 00:09:08.773 "core_mask": "0x1", 00:09:08.773 "workload": "verify", 00:09:08.773 "status": "finished", 00:09:08.773 "verify_range": { 00:09:08.773 "start": 0, 00:09:08.773 "length": 16384 00:09:08.773 }, 00:09:08.773 "queue_depth": 1024, 00:09:08.773 "io_size": 4096, 00:09:08.773 "runtime": 10.056898, 00:09:08.773 "iops": 10904.654695712336, 00:09:08.773 "mibps": 42.59630740512631, 00:09:08.773 "io_failed": 0, 00:09:08.774 "io_timeout": 0, 00:09:08.774 "avg_latency_us": 93555.06610408713, 00:09:08.774 "min_latency_us": 15426.166153846154, 00:09:08.774 "max_latency_us": 65334.35076923077 00:09:08.774 } 00:09:08.774 ], 00:09:08.774 "core_count": 1 00:09:08.774 } 00:09:08.774 14:13:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68676 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 68676 ']' 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 68676 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68676 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:08.774 killing process with pid 68676 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68676' 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 68676 00:09:08.774 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.774 00:09:08.774 Latency(us) 00:09:08.774 [2024-11-06T14:13:36.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.774 [2024-11-06T14:13:36.409Z] =================================================================================================================== 00:09:08.774 [2024-11-06T14:13:36.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:08.774 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 68676 00:09:09.032 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:09.032 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:09.032 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.032 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.290 rmmod nvme_tcp 00:09:09.290 rmmod nvme_fabrics 00:09:09.290 rmmod nvme_keyring 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68625 ']' 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68625 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 68625 ']' 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- 
# kill -0 68625 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68625 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:09.290 killing process with pid 68625 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68625' 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 68625 00:09:09.290 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 68625 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.856 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:10.114 00:09:10.114 real 0m14.211s 00:09:10.114 user 0m24.506s 00:09:10.114 sys 0m1.638s 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.114 ************************************ 00:09:10.114 END TEST nvmf_queue_depth 00:09:10.114 ************************************ 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.114 ************************************ 00:09:10.114 START TEST nvmf_target_multipath 00:09:10.114 ************************************ 00:09:10.114 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:10.373 * Looking for test storage... 
00:09:10.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.373 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.373 --rc genhtml_branch_coverage=1 00:09:10.373 --rc genhtml_function_coverage=1 00:09:10.373 --rc genhtml_legend=1 00:09:10.373 --rc geninfo_all_blocks=1 00:09:10.373 --rc geninfo_unexecuted_blocks=1 00:09:10.373 00:09:10.373 ' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.374 --rc genhtml_branch_coverage=1 00:09:10.374 --rc genhtml_function_coverage=1 00:09:10.374 --rc genhtml_legend=1 00:09:10.374 --rc geninfo_all_blocks=1 00:09:10.374 --rc geninfo_unexecuted_blocks=1 00:09:10.374 00:09:10.374 ' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.374 --rc genhtml_branch_coverage=1 00:09:10.374 --rc genhtml_function_coverage=1 00:09:10.374 --rc genhtml_legend=1 00:09:10.374 --rc geninfo_all_blocks=1 00:09:10.374 --rc geninfo_unexecuted_blocks=1 00:09:10.374 00:09:10.374 ' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.374 --rc genhtml_branch_coverage=1 00:09:10.374 --rc genhtml_function_coverage=1 00:09:10.374 --rc genhtml_legend=1 00:09:10.374 --rc geninfo_all_blocks=1 00:09:10.374 --rc geninfo_unexecuted_blocks=1 00:09:10.374 00:09:10.374 ' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.374 
14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:10.374 14:13:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:10.374 Cannot find device "nvmf_init_br" 00:09:10.374 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:10.375 Cannot find device "nvmf_init_br2" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:10.375 Cannot find device "nvmf_tgt_br" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.375 Cannot find device "nvmf_tgt_br2" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:10.375 Cannot find device "nvmf_init_br" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:10.375 Cannot find device "nvmf_init_br2" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:10.375 Cannot find device "nvmf_tgt_br" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:10.375 Cannot find device "nvmf_tgt_br2" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:10.375 Cannot find device "nvmf_br" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:10.375 Cannot find device "nvmf_init_if" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:10.375 Cannot find device "nvmf_init_if2" 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:10.375 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:10.633 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
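Together with the bridge and iptables lines that follow, the nvmf_veth_init steps above build a two-path topology: two initiator-side veth endpoints in the default namespace and two target-side endpoints inside nvmf_tgt_ns_spdk, with all four peer ends enslaved to the nvmf_br bridge. A sketch of the intended end state, derived from the commands in this log:

  # default netns                                           netns nvmf_tgt_ns_spdk
  # nvmf_init_if  10.0.0.1/24 ── nvmf_init_br ─┐         ┌─ nvmf_tgt_br ── nvmf_tgt_if  10.0.0.3/24
  #                                            ├ nvmf_br ┤
  # nvmf_init_if2 10.0.0.2/24 ── nvmf_init_br2 ┘         └─ nvmf_tgt_br2 ── nvmf_tgt_if2 10.0.0.4/24

Once setup completes, the result can be inspected with standard iproute2 tooling, e.g.:

  ip -br addr show dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip -br addr
  bridge link show | grep nvmf

The four pings that close the setup (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) confirm that both paths forward in both directions.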
00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:09:10.634 00:09:10.634 --- 10.0.0.3 ping statistics --- 00:09:10.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.634 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.634 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.634 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:09:10.634 00:09:10.634 --- 10.0.0.4 ping statistics --- 00:09:10.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.634 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:10.634 00:09:10.634 --- 10.0.0.1 ping statistics --- 00:09:10.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.634 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:10.634 00:09:10.634 --- 10.0.0.2 ping statistics --- 00:09:10.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.634 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=69076 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 69076 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 69076 ']' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
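For reference, the target is launched inside the namespace with the harness's usual flags; a sketch of the manual equivalent (the flag readings are standard SPDK application options, not specific to this test):

  sudo ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # -i 0       shared-memory instance id (matches the 'spdk_trace -s nvmf -i 0' hint below)
  # -e 0xFFFF  tracepoint group mask, which produces the app_setup_trace notices below
  # -m 0xF     core mask: four reactors on cores 0-3, as the reactor messages below confirm

waitforlisten then simply polls until the app answers on /var/tmp/spdk.sock before any RPCs are issued.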
00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.634 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:10.634 [2024-11-06 14:13:38.204899] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:10.634 [2024-11-06 14:13:38.205025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.892 [2024-11-06 14:13:38.362699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.892 [2024-11-06 14:13:38.467315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.892 [2024-11-06 14:13:38.467368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.892 [2024-11-06 14:13:38.467383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.892 [2024-11-06 14:13:38.467394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.892 [2024-11-06 14:13:38.467406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.892 [2024-11-06 14:13:38.468918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.892 [2024-11-06 14:13:38.469034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.892 [2024-11-06 14:13:38.469164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.892 [2024-11-06 14:13:38.469181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:09:11.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:11.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:11.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.716 [2024-11-06 14:13:39.262092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.716 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:11.974 Malloc0 00:09:11.974 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
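The multipath subsystem differs from the queue-depth one in a single flag: -r turns on ANA (Asymmetric Namespace Access) reporting, which is what lets the two listeners added next advertise per-path states (optimized, non-optimized, inaccessible) to the host. Grouped together, the bring-up performed by the surrounding lines is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r  # -r: ANA reporting on
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

The host then connects once per path (nvme connect ... -a 10.0.0.3 and -a 10.0.0.4), the kernel's native NVMe multipathing merges both into a single nvme-subsys0 with controller paths nvme0c0n1 and nvme0c1n1, and the check_ana_state helper polls /sys/block/nvme0cXn1/ana_state until each path reports the expected state.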
00:09:12.232 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.489 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:12.747 [2024-11-06 14:13:40.145887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:12.747 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:12.747 [2024-11-06 14:13:40.354076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:12.747 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:13.005 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:13.263 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.263 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:09:13.263 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.263 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:13.263 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:09:15.160 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:15.160 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69208 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:15.417 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:15.417 [global] 00:09:15.417 thread=1 00:09:15.417 invalidate=1 00:09:15.417 rw=randrw 00:09:15.417 time_based=1 00:09:15.417 runtime=6 00:09:15.417 ioengine=libaio 00:09:15.417 direct=1 00:09:15.417 bs=4096 00:09:15.417 iodepth=128 00:09:15.417 norandommap=0 00:09:15.417 numjobs=1 00:09:15.417 00:09:15.417 verify_dump=1 00:09:15.417 verify_backlog=512 00:09:15.417 verify_state_save=0 00:09:15.417 do_verify=1 00:09:15.418 verify=crc32c-intel 00:09:15.418 [job0] 00:09:15.418 filename=/dev/nvme0n1 00:09:15.418 Could not set queue depth (nvme0n1) 00:09:15.418 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.418 fio-3.35 00:09:15.418 Starting 1 thread 00:09:16.351 14:13:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:16.609 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:17.982 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:19.369 14:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:19.369 14:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.369 14:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:19.369 14:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69208 00:09:21.894 00:09:21.894 job0: (groupid=0, jobs=1): err= 0: pid=69235: Wed Nov 6 14:13:49 2024 00:09:21.894 read: IOPS=11.2k, BW=43.9MiB/s (46.1MB/s)(264MiB/6006msec) 00:09:21.895 slat (usec): min=3, max=6153, avg=53.22, stdev=254.32 00:09:21.895 clat (usec): min=1887, max=21083, avg=7643.84, stdev=1460.85 00:09:21.895 lat (usec): min=1907, max=21091, avg=7697.06, stdev=1476.32 00:09:21.895 clat percentiles (usec): 00:09:21.895 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6259], 00:09:21.895 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 7898], 00:09:21.895 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[ 9896], 00:09:21.895 | 99.00th=[12125], 99.50th=[12649], 99.90th=[17433], 99.95th=[19006], 00:09:21.895 | 99.99th=[20317] 00:09:21.895 bw ( KiB/s): min= 8200, max=33680, per=53.31%, avg=23989.09, stdev=6983.55, samples=11 00:09:21.895 iops : min= 2050, max= 8420, avg=5997.27, stdev=1745.89, samples=11 00:09:21.895 write: IOPS=6827, BW=26.7MiB/s (28.0MB/s)(141MiB/5281msec); 0 zone resets 00:09:21.895 slat (usec): min=5, max=4681, avg=60.89, stdev=178.19 00:09:21.895 clat (usec): min=1402, max=19005, avg=6814.14, stdev=1243.69 00:09:21.895 lat (usec): min=1420, max=19021, avg=6875.03, stdev=1250.97 00:09:21.895 clat percentiles (usec): 00:09:21.895 | 1.00th=[ 3589], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 5866], 00:09:21.895 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7177], 00:09:21.895 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:09:21.895 | 99.00th=[10683], 99.50th=[11994], 99.90th=[17171], 99.95th=[17433], 00:09:21.895 | 99.99th=[18482] 00:09:21.895 bw ( KiB/s): min= 8464, max=32792, per=87.92%, avg=24009.27, stdev=6630.80, samples=11 00:09:21.895 iops : min= 2116, max= 8198, avg=6002.27, stdev=1657.66, samples=11 00:09:21.895 lat (msec) : 2=0.04%, 4=0.69%, 10=95.67%, 20=3.59%, 50=0.01% 00:09:21.895 cpu : usr=3.06%, sys=15.13%, ctx=6257, majf=0, minf=163 00:09:21.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:21.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.895 issued rwts: total=67568,36054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.895 00:09:21.895 Run status group 0 (all jobs): 00:09:21.895 READ: bw=43.9MiB/s (46.1MB/s), 43.9MiB/s-43.9MiB/s (46.1MB/s-46.1MB/s), io=264MiB (277MB), run=6006-6006msec 00:09:21.895 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=141MiB (148MB), run=5281-5281msec 00:09:21.895 00:09:21.895 Disk stats (read/write): 00:09:21.895 nvme0n1: ios=66679/35475, merge=0/0, ticks=487240/230097, in_queue=717337, util=98.61% 00:09:21.895 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:21.895 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:09:22.152 14:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:23.114 14:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:23.114 14:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.114 14:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:23.114 14:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:23.114 14:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=69365 00:09:23.114 14:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:23.114 14:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:23.114 [global] 00:09:23.114 thread=1 00:09:23.114 invalidate=1 00:09:23.114 rw=randrw 00:09:23.114 time_based=1 00:09:23.114 runtime=6 00:09:23.114 ioengine=libaio 00:09:23.114 direct=1 00:09:23.114 bs=4096 00:09:23.114 iodepth=128 00:09:23.114 norandommap=0 00:09:23.114 numjobs=1 00:09:23.114 00:09:23.114 verify_dump=1 00:09:23.114 verify_backlog=512 00:09:23.114 verify_state_save=0 00:09:23.114 do_verify=1 00:09:23.114 verify=crc32c-intel 00:09:23.114 [job0] 00:09:23.114 filename=/dev/nvme0n1 00:09:23.114 Could not set queue depth (nvme0n1) 00:09:23.114 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.114 fio-3.35 00:09:23.114 Starting 1 thread 00:09:24.047 14:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:24.304 14:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:24.562 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:25.492 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:25.493 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:25.493 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:25.493 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:25.750 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:26.006 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:26.938 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:26.938 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:26.938 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:26.938 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 69365 00:09:29.463 00:09:29.463 job0: (groupid=0, jobs=1): err= 0: pid=69386: Wed Nov 6 14:13:56 2024 00:09:29.463 read: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(345MiB/6004msec) 00:09:29.463 slat (usec): min=3, max=4472, avg=36.17, stdev=190.05 00:09:29.463 clat (usec): min=140, max=18585, avg=5989.01, stdev=1957.18 00:09:29.463 lat (usec): min=149, max=18592, avg=6025.18, stdev=1969.78 00:09:29.463 clat percentiles (usec): 00:09:29.463 | 1.00th=[ 502], 5.00th=[ 1074], 10.00th=[ 3359], 20.00th=[ 5473], 00:09:29.463 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6390], 00:09:29.463 | 70.00th=[ 6718], 80.00th=[ 7111], 90.00th=[ 7767], 95.00th=[ 8586], 00:09:29.463 | 99.00th=[10290], 99.50th=[11469], 99.90th=[13829], 99.95th=[15926], 00:09:29.463 | 99.99th=[17957] 00:09:29.463 bw ( KiB/s): min=10536, max=52960, per=51.67%, avg=30427.64, stdev=12556.83, samples=11 00:09:29.463 iops : min= 2634, max=13240, avg=7606.91, stdev=3139.21, samples=11 00:09:29.463 write: IOPS=9122, BW=35.6MiB/s (37.4MB/s)(179MiB/5035msec); 0 zone resets 00:09:29.463 slat (usec): min=7, max=2904, avg=40.64, stdev=120.42 00:09:29.463 clat (usec): min=109, max=17293, avg=5072.73, stdev=1959.73 00:09:29.463 lat (usec): min=133, max=17310, avg=5113.36, stdev=1968.40 00:09:29.463 clat percentiles (usec): 00:09:29.463 | 1.00th=[ 322], 5.00th=[ 635], 10.00th=[ 1565], 20.00th=[ 3621], 00:09:29.463 | 30.00th=[ 5080], 40.00th=[ 5407], 50.00th=[ 5604], 60.00th=[ 5800], 00:09:29.463 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6718], 95.00th=[ 7504], 00:09:29.463 | 99.00th=[ 8848], 99.50th=[10421], 99.90th=[14877], 99.95th=[16909], 00:09:29.463 | 99.99th=[17171] 00:09:29.463 bw ( KiB/s): min=11136, max=53248, per=83.68%, avg=30533.09, stdev=12261.22, samples=11 00:09:29.463 iops : min= 2784, max=13312, avg=7633.27, stdev=3065.30, samples=11 00:09:29.463 lat (usec) : 250=0.20%, 500=1.41%, 750=2.41%, 1000=1.89% 00:09:29.463 lat (msec) : 2=2.73%, 4=7.14%, 10=83.15%, 20=1.08% 00:09:29.463 cpu : usr=3.83%, sys=17.29%, ctx=10763, majf=0, minf=127 00:09:29.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:29.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.463 issued rwts: total=88384,45930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.463 00:09:29.463 Run status group 0 (all jobs): 00:09:29.463 READ: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=345MiB (362MB), run=6004-6004msec 00:09:29.463 WRITE: bw=35.6MiB/s (37.4MB/s), 35.6MiB/s-35.6MiB/s (37.4MB/s-37.4MB/s), io=179MiB (188MB), run=5035-5035msec 00:09:29.463 00:09:29.463 Disk stats (read/write): 00:09:29.463 nvme0n1: ios=87392/45128, merge=0/0, ticks=498666/217566, in_queue=716232, util=98.48% 00:09:29.463 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:29.463 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:09:29.463 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:09:29.463 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:29.463 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.463 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:29.463 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.463 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:09:29.463 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.722 rmmod nvme_tcp 00:09:29.722 rmmod nvme_fabrics 00:09:29.722 rmmod nvme_keyring 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 69076 ']' 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 69076 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 69076 ']' 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 69076 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69076 00:09:29.722 killing process with pid 69076 00:09:29.722 14:13:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69076' 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 69076 00:09:29.722 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 69076 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.654 14:13:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:30.654 00:09:30.654 real 0m20.585s 00:09:30.654 user 1m19.156s 00:09:30.654 sys 0m4.856s 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.654 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 ************************************ 00:09:30.654 END TEST nvmf_target_multipath 00:09:30.654 ************************************ 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.913 ************************************ 00:09:30.913 START TEST nvmf_zcopy 00:09:30.913 ************************************ 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:30.913 * Looking for test storage... 
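The poll loop that dominates the multipath trace above (tagged target/multipath.sh@18-26 in the xtrace) can be reconstructed from the trace itself. The variable names, the 20-iteration budget, and the 1s sleep come straight from the trace; the failure action of the timeout check is never reached in this run, so the "return 1" below is an assumption. The "echo numa" / "echo round-robin" entries (multipath.sh@85 and @113) switch the kernel's native-multipath I/O policy between the two fio runs, presumably via a redirection into the subsystem's sysfs iopolicy attribute that the trace does not show.

    # Sketch reconstructed from the xtrace above; not the verbatim SPDK helper.
    check_ana_state() {
        local path=$1 ana_state=$2              # e.g. nvme0c1n1 inaccessible
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state

        # Re-test while the sysfs node is missing or still shows the old state;
        # the trailing "sleep 1s" only runs when another pass is needed, which
        # matches the order of entries in the trace.
        while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]] && sleep 1s; do
            (( timeout-- == 0 )) && return 1    # assumed failure action; not hit in this run
        done
    }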
00:09:30.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.913 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.914 --rc genhtml_branch_coverage=1 00:09:30.914 --rc genhtml_function_coverage=1 00:09:30.914 --rc genhtml_legend=1 00:09:30.914 --rc geninfo_all_blocks=1 00:09:30.914 --rc geninfo_unexecuted_blocks=1 00:09:30.914 00:09:30.914 ' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.914 --rc genhtml_branch_coverage=1 00:09:30.914 --rc genhtml_function_coverage=1 00:09:30.914 --rc genhtml_legend=1 00:09:30.914 --rc geninfo_all_blocks=1 00:09:30.914 --rc geninfo_unexecuted_blocks=1 00:09:30.914 00:09:30.914 ' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.914 --rc genhtml_branch_coverage=1 00:09:30.914 --rc genhtml_function_coverage=1 00:09:30.914 --rc genhtml_legend=1 00:09:30.914 --rc geninfo_all_blocks=1 00:09:30.914 --rc geninfo_unexecuted_blocks=1 00:09:30.914 00:09:30.914 ' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.914 --rc genhtml_branch_coverage=1 00:09:30.914 --rc genhtml_function_coverage=1 00:09:30.914 --rc genhtml_legend=1 00:09:30.914 --rc geninfo_all_blocks=1 00:09:30.914 --rc geninfo_unexecuted_blocks=1 00:09:30.914 00:09:30.914 ' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
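The lcov gate just traced ("lt 1.15 2" through cmp_versions in scripts/common.sh) splits both version strings on ".", "-" and ":" and compares the fields numerically, left to right. A sketch of the shape visible in the trace; the real helper also routes each field through a decimal() normalizer and supports more operators than the "<" exercised here:

    # Sketch only; reconstructed from the xtrace, details may differ upstream.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l
        local op=$2 lt=0 gt=0 eq=0 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        # Walk the longer of the two field lists; missing fields compare as 0.
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then gt=1; break; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then lt=1; break; fi
        done
        case "$op" in
            "<") (( lt == 1 )) ;;
            ">") (( gt == 1 )) ;;
        esac
    }
    lt() { cmp_versions "$1" "<" "$2"; }    # 1.15 < 2, so the coverage path is taken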
00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
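One genuine wart is captured just above: "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from build_nvmf_app_args evaluating '[' '' -eq 1 ']', i.e. an empty flag variable fed to a numeric test. It is harmless here (the branch is simply skipped), but the usual hardening is to default the variable before comparing. SPDK_SOME_FLAG below is a placeholder; the trace does not reveal the real variable name:

    # Fragile:   [ "$SPDK_SOME_FLAG" -eq 1 ]      # errors out when the flag is empty
    # Defensive: default empty/unset to 0 before the numeric test.
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
        : # hypothetical branch; the real one appends nvmf_tgt arguments
    fi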
00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:30.914 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:30.915 Cannot find device "nvmf_init_br" 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:30.915 14:13:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:30.915 Cannot find device "nvmf_init_br2" 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.915 Cannot find device "nvmf_tgt_br" 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.915 Cannot find device "nvmf_tgt_br2" 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:30.915 Cannot find device "nvmf_init_br" 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:30.915 Cannot find device "nvmf_init_br2" 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:30.915 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:31.173 Cannot find device "nvmf_tgt_br" 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:31.173 Cannot find device "nvmf_tgt_br2" 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:31.173 Cannot find device "nvmf_br" 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:31.173 Cannot find device "nvmf_init_if" 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:31.173 Cannot find device "nvmf_init_if2" 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.173 14:13:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:31.173 00:09:31.173 --- 10.0.0.3 ping statistics --- 00:09:31.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.173 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:31.173 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.173 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.173 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.026 ms 00:09:31.173 00:09:31.173 --- 10.0.0.4 ping statistics --- 00:09:31.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.173 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:09:31.174 00:09:31.174 --- 10.0.0.1 ping statistics --- 00:09:31.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.174 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:09:31.174 00:09:31.174 --- 10.0.0.2 ping statistics --- 00:09:31.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.174 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.174 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69721 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69721 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 69721 ']' 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.431 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.431 [2024-11-06 14:13:58.874643] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
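The four pings above are the smoke test for the topology nvmf_veth_init just assembled: the initiator ends nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace, the target ends nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and all four peer interfaces are enslaved to the nvmf_br bridge. Condensed from the trace, with the second veth pair and the "ip link set ... up" steps elided, and with the ipts wrapper reconstructed from the expansion visible above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # host side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # (nvmf_init_if2 / nvmf_tgt_if2 for 10.0.0.2 and 10.0.0.4 are set up the same way)

    # Every rule carries an SPDK_NVMF comment so teardown can strip them wholesale
    # with "iptables-save | grep -v SPDK_NVMF | iptables-restore", as seen earlier
    # in the multipath cleanup:
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT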
00:09:31.431 [2024-11-06 14:13:58.874759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.432 [2024-11-06 14:13:59.039166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.690 [2024-11-06 14:13:59.138834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.690 [2024-11-06 14:13:59.138883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.690 [2024-11-06 14:13:59.138894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.690 [2024-11-06 14:13:59.138909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.690 [2024-11-06 14:13:59.138918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.690 [2024-11-06 14:13:59.139800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.256 [2024-11-06 14:13:59.799196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.256 [2024-11-06 14:13:59.815324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.256 malloc0 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:32.256 { 00:09:32.256 "params": { 00:09:32.256 "name": "Nvme$subsystem", 00:09:32.256 "trtype": "$TEST_TRANSPORT", 00:09:32.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.256 "adrfam": "ipv4", 00:09:32.256 "trsvcid": "$NVMF_PORT", 00:09:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.256 "hdgst": ${hdgst:-false}, 00:09:32.256 "ddgst": ${ddgst:-false} 00:09:32.256 }, 00:09:32.256 "method": "bdev_nvme_attach_controller" 00:09:32.256 } 00:09:32.256 EOF 00:09:32.256 )") 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:32.256 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:32.256 "params": { 00:09:32.256 "name": "Nvme1", 00:09:32.256 "trtype": "tcp", 00:09:32.256 "traddr": "10.0.0.3", 00:09:32.256 "adrfam": "ipv4", 00:09:32.257 "trsvcid": "4420", 00:09:32.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.257 "hdgst": false, 00:09:32.257 "ddgst": false 00:09:32.257 }, 00:09:32.257 "method": "bdev_nvme_attach_controller" 00:09:32.257 }' 00:09:32.515 [2024-11-06 14:13:59.944570] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:32.515 [2024-11-06 14:13:59.944696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:09:32.515 [2024-11-06 14:14:00.103868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.773 [2024-11-06 14:14:00.204394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.031 Running I/O for 10 seconds... 00:09:34.908 6141.00 IOPS, 47.98 MiB/s [2024-11-06T14:14:03.915Z] 6759.00 IOPS, 52.80 MiB/s [2024-11-06T14:14:04.848Z] 7102.33 IOPS, 55.49 MiB/s [2024-11-06T14:14:05.830Z] 7241.00 IOPS, 56.57 MiB/s [2024-11-06T14:14:06.773Z] 7343.20 IOPS, 57.37 MiB/s [2024-11-06T14:14:07.705Z] 7411.83 IOPS, 57.90 MiB/s [2024-11-06T14:14:08.640Z] 7479.57 IOPS, 58.43 MiB/s [2024-11-06T14:14:09.602Z] 7533.00 IOPS, 58.85 MiB/s [2024-11-06T14:14:10.538Z] 7570.78 IOPS, 59.15 MiB/s [2024-11-06T14:14:10.538Z] 7595.40 IOPS, 59.34 MiB/s 00:09:42.903 Latency(us) 00:09:42.903 [2024-11-06T14:14:10.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.903 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:42.903 Verification LBA range: start 0x0 length 0x1000 00:09:42.903 Nvme1n1 : 10.01 7597.47 59.36 0.00 0.00 16798.03 2205.54 27222.65 00:09:42.903 [2024-11-06T14:14:10.538Z] =================================================================================================================== 00:09:42.903 [2024-11-06T14:14:10.538Z] Total : 7597.47 59.36 0.00 0.00 16798.03 2205.54 27222.65 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69902 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.471 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.471 { 00:09:43.471 "params": { 00:09:43.471 "name": "Nvme$subsystem", 
00:09:43.471 "trtype": "$TEST_TRANSPORT", 00:09:43.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.471 "adrfam": "ipv4", 00:09:43.471 "trsvcid": "$NVMF_PORT", 00:09:43.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.471 "hdgst": ${hdgst:-false}, 00:09:43.471 "ddgst": ${ddgst:-false} 00:09:43.471 }, 00:09:43.471 "method": "bdev_nvme_attach_controller" 00:09:43.471 } 00:09:43.471 EOF 00:09:43.471 )") 00:09:43.730 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:43.730 [2024-11-06 14:14:11.107990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.108338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:43.730 14:14:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.730 "params": { 00:09:43.730 "name": "Nvme1", 00:09:43.730 "trtype": "tcp", 00:09:43.730 "traddr": "10.0.0.3", 00:09:43.730 "adrfam": "ipv4", 00:09:43.730 "trsvcid": "4420", 00:09:43.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.730 "hdgst": false, 00:09:43.730 "ddgst": false 00:09:43.730 }, 00:09:43.730 "method": "bdev_nvme_attach_controller" 00:09:43.730 }' 00:09:43.730 [2024-11-06 14:14:11.119965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.119996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.127963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.127989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.135916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.135941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.143931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.143968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.155942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.155979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.163926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.163960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.171933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.171964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 [2024-11-06 14:14:11.173763] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:43.730 [2024-11-06 14:14:11.173893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69902 ] 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.179943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.179977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.187926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.187957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.195937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.195968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.203926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.203955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.211944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.211974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.219941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.219971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.227938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.227969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.235958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.235980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.243959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.243981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.251944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.251977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.259976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.260000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.267945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.267982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:43.730 [2024-11-06 14:14:11.275983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.730 [2024-11-06 14:14:11.276006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.730 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.730 [2024-11-06 14:14:11.283971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.283992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.291963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.291985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.299980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.300003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.307989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.308012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.315974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.315998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.323986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.324010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.330009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.731 [2024-11-06 14:14:11.331981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.332001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.340002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.340027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.347993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.348017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.731 [2024-11-06 14:14:11.355985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.731 [2024-11-06 14:14:11.356008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.731 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.364000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.364023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.372012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.372035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.379989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.380013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.388009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.388034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.395999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.396022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.404012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.404036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.412012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.412035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 [2024-11-06 14:14:11.413311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.420010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.420033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.428027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:09:43.989 [2024-11-06 14:14:11.428054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.436019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.436043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.444006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.444028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.989 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.989 [2024-11-06 14:14:11.452021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.989 [2024-11-06 14:14:11.452043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.460010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.460032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.468046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.468075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.476038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.476063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.484016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.484037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.492025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.492047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.500041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.500062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.508022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.508043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.516034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.516055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.524023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.524044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.532038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.532059] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.540068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.540095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.548052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.548079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.556067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.556090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.564074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.564098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.572056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.572081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.580073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.580097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.588055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.588078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.596069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.596091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.608292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.608326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.990 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.990 [2024-11-06 14:14:11.620273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.990 [2024-11-06 14:14:11.620301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.628280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.628306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.636282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.636308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.644271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.644297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.652284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.652310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.660269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.660292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.668353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.668386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.676332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.676358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 Running I/O for 5 seconds... 
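The run of Code=-32602 records above and below is the harness repeatedly calling nvmf_subsystem_add_ns while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, so each attempt is rejected with "Requested NSID 1 already in use" as bdevperf's 5-second run starts. A minimal sketch of reproducing one such rejected call by hand, assuming rpc_cmd here wraps SPDK's stock scripts/rpc.py and the target's RPC socket is the default /var/tmp/spdk.sock seen earlier in this log:

  # malloc0 is already attached to cnode1 as NSID 1 (see the setup above),
  # so a second add of the same NSID is expected to fail.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1
  # expected result: JSON-RPC error Code=-32602, Msg=Invalid parameters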
00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.684349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.684374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.696084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.696113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.705640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.705672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.714886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.714916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.724146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.724172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.734129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.734156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:09:44.248 [2024-11-06 14:14:11.743932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.743968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.754043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.754070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.764056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.764086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.248 [2024-11-06 14:14:11.773237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.248 [2024-11-06 14:14:11.773268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.248 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.249 [2024-11-06 14:14:11.783243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.249 [2024-11-06 14:14:11.783272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.249 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.249 [2024-11-06 14:14:11.799235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.249 [2024-11-06 14:14:11.799263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.249 2024/11/06 14:14:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:44.249 [2024-11-06 14:14:11.810536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.249 [2024-11-06 14:14:11.810571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.249 2024/11/06 14:14:11 error on JSON-RPC call, 
00:09:45.283 15192.00 IOPS, 118.69 MiB/s [2024-11-06T14:14:12.918Z]
00:09:45.544 [2024-11-06 14:14:13.104708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:45.544 [2024-11-06 14:14:13.104740]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.544 [2024-11-06 14:14:13.112396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.544 [2024-11-06 14:14:13.112425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.544 [2024-11-06 14:14:13.123723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.544 [2024-11-06 14:14:13.123752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.544 [2024-11-06 14:14:13.133737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.544 [2024-11-06 14:14:13.133768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.544 [2024-11-06 14:14:13.142015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.544 [2024-11-06 14:14:13.142043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.544 [2024-11-06 14:14:13.152590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.544 [2024-11-06 14:14:13.152618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.544 [2024-11-06 14:14:13.162729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.544 [2024-11-06 14:14:13.162758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.544 [2024-11-06 14:14:13.173036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.544 [2024-11-06 14:14:13.173064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.544 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.183095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.183121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.193106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.193132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.203068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.203101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.212643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.212671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.222505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.222533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.231571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.231599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.241156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.241183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.256422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.256448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.264559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.264586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.802 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.802 [2024-11-06 14:14:13.274173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.802 [2024-11-06 14:14:13.274199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.283602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.283631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.292757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.292788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:45.803 [2024-11-06 14:14:13.302553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.302582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.312449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.312476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.321633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.321661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.330873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.330904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.338319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.338348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.349990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.350021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.365343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.365376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.376386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.376418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.386756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.386786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.396904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.396933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.406923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.406959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.417205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.417232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:45.803 [2024-11-06 14:14:13.426506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.803 [2024-11-06 14:14:13.426533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.803 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.435783] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.435817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.445742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.445769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.454999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.455025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.470601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.470631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.481806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.481835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.491927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.491969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.501452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.501484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.511469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.511503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.521527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.521560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.530789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.530823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.540256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.540286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.549583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.549614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.558931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.558970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.569115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.569143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.578509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.578541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.588421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.588453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.597689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.597718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.606906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.606935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.616964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.616991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.624222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.624247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.635865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.635890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.061 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.061 [2024-11-06 14:14:13.645496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.061 [2024-11-06 14:14:13.645525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.062 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.062 [2024-11-06 14:14:13.655430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.062 [2024-11-06 14:14:13.655457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.062 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.062 [2024-11-06 14:14:13.665323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.062 [2024-11-06 14:14:13.665350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.062 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.062 [2024-11-06 14:14:13.675231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.062 [2024-11-06 14:14:13.675256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.062 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.062 15261.00 IOPS, 119.23 MiB/s [2024-11-06T14:14:13.697Z] [2024-11-06 14:14:13.684349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.062 [2024-11-06 14:14:13.684375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.062 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.062 [2024-11-06 14:14:13.694106] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.062 [2024-11-06 14:14:13.694132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.703231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.703256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.712523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.712548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.722429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.722457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.732461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.732487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.741797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.741825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.751763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.751797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.760968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.760994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.770255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.770283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.780202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.780226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.790161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.790188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.799270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.799298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.809306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.809334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.819284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.819310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.828613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.828641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.837847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.837882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.847568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.320 [2024-11-06 14:14:13.847595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.320 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.320 [2024-11-06 14:14:13.856801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.856829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.866695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.866722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.876575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.876603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.886499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.886525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.896473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.896501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.906367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.906394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.916408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.916435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.925747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.925774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.935604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.321 [2024-11-06 14:14:13.935631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:46.321 [2024-11-06 14:14:13.945621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:46.321 [2024-11-06 14:14:13.945649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.321 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:46.584 [2024-11-06 14:14:13.955438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.584 [2024-11-06 14:14:13.955466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.584 2024/11/06 14:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record sequence (subsystem.c:2123 "Requested NSID 1 already in use", nvmf_rpc.c:1517 "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats with identical parameters roughly every 10 ms from 14:14:13.964 through 14:14:14.671 ...]
00:09:47.104 [2024-11-06 14:14:14.679748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.104 [2024-11-06 14:14:14.679778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.104 15241.67 IOPS, 119.08 MiB/s [2024-11-06T14:14:14.739Z]
2024/11/06 14:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
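Each failing record above is one nvmf_subsystem_add_ns JSON-RPC call that requests NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already allocated, so the target rejects the call with Code=-32602 Msg=Invalid parameters. A minimal sketch of an equivalent call, assuming the default SPDK RPC socket at /var/tmp/spdk.sock and the in-tree scripts/rpc.py helper (option spellings can differ between SPDK versions):

  # Ask the target to attach bdev "malloc0" as namespace ID 1 on cnode1;
  # with NSID 1 already in use this fails, as logged above.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
      --nsid 1 nqn.2016-06.io.spdk:cnode1 malloc0

  # The same request as a raw JSON-RPC payload, mirroring the params in the
  # log records; delivering it with socat is illustrative only, not part of
  # this test:
  echo '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1}}}' \
      | socat - UNIX-CONNECT:/var/tmp/spdk.sock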
00:09:47.104 [2024-11-06 14:14:14.688625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.104 [2024-11-06 14:14:14.688653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.104 2024/11/06 14:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record sequence repeats with identical parameters roughly every 10 ms from 14:14:14.698 through 14:14:15.250 ...]
00:09:47.893 [2024-11-06 14:14:15.260080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.894 [2024-11-06 14:14:15.260107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.269735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.269770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.277127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.277155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.288081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.288111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.297673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.297702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.307076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.307105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.314419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.314449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.325598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.325628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.333816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.333844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.343872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.343898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.351405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.351433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.362253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.362281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.370829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.370857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.379328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.379358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.387643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.387671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.396243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.396269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.404687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.404715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.413089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.413116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.421516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.421544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.430035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.430063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.438695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:47.894 [2024-11-06 14:14:15.438724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.448579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.448609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.458152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.458181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.467444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.467475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.476967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.476994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.894 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.894 [2024-11-06 14:14:15.487082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.894 [2024-11-06 14:14:15.487110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.895 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.895 [2024-11-06 14:14:15.494658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.895 [2024-11-06 14:14:15.494687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.895 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.895 [2024-11-06 14:14:15.509764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.895 [2024-11-06 14:14:15.509796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.895 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:47.895 [2024-11-06 14:14:15.519525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.895 [2024-11-06 14:14:15.519558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.895 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.152 [2024-11-06 14:14:15.528854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.152 [2024-11-06 14:14:15.528885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.152 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.152 [2024-11-06 14:14:15.536493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.152 [2024-11-06 14:14:15.536522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.152 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.547148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.547176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.555427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.555454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.565647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.565679] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.575143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.575171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.582532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.582559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.593924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.593964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.602512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.602541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.611131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.611162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.153 [2024-11-06 14:14:15.619737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.153 [2024-11-06 14:14:15.619768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.153 2024/11/06 14:14:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
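Every retry above fails the same way: the test re-issues nvmf_subsystem_add_ns for an NSID the subsystem already holds, and SPDK rejects each attempt with JSON-RPC error -32602 (Invalid parameters). A minimal Go sketch of what each request looks like, reconstructed only from the method and params printed in the log (the request id is an assumption, since the log does not record it), which also shows where the odd %!s(bool=false) token in the logged params comes from:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical reconstruction of the failing JSON-RPC 2.0 request;
        // method and params are taken verbatim from the log, the id is assumed.
        req := map[string]interface{}{
            "jsonrpc": "2.0",
            "id":      1,
            "method":  "nvmf_subsystem_add_ns",
            "params": map[string]interface{}{
                "nqn": "nqn.2016-06.io.spdk:cnode1",
                "namespace": map[string]interface{}{
                    "bdev_name":       "malloc0",
                    "nsid":            1, // already in use, so SPDK answers Code=-32602
                    "no_auto_visible": false,
                },
            },
        }
        b, err := json.Marshal(req)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))

        // The stray "%!s(bool=false)" in the logged params map is Go's fmt
        // package flagging a bool printed with the %s verb:
        fmt.Printf("%s\n", false) // prints: %!s(bool=false)
    }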
00:09:48.153 15154.25 IOPS, 118.39 MiB/s [2024-11-06T14:14:15.788Z]
[... the duplicate-NSID failure sequence resumes unchanged from 14:14:15.683 through 14:14:16.304 ...]
00:09:48.932 [2024-11-06 14:14:16.311963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:48.932 [2024-11-06 14:14:16.311992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.323106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.323140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.330905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.330937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.341816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.341852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.350262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.350293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.361833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.361865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.371783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.371831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.381966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.382002] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.391527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.391568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.401013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.401046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.410815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.410845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.420208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.420238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.429514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.429543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.438982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.439011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.932 [2024-11-06 14:14:16.446346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.932 [2024-11-06 14:14:16.446374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.457295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.457323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.466983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.467009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.477076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.477103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.486293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.486320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.493678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.493707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.504884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.504911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.514638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.514667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.524825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.524853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.532149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.532174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.543023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.543051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.553570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.553599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.933 [2024-11-06 14:14:16.561608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.933 [2024-11-06 14:14:16.561636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.933 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:49.191 [2024-11-06 14:14:16.572714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.191 [2024-11-06 14:14:16.572742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.191 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.191 [2024-11-06 14:14:16.580894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.191 [2024-11-06 14:14:16.580923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.191 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.191 [2024-11-06 14:14:16.591015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.191 [2024-11-06 14:14:16.591044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.191 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.191 [2024-11-06 14:14:16.600561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.191 [2024-11-06 14:14:16.600591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.191 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.191 [2024-11-06 14:14:16.609882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.191 [2024-11-06 14:14:16.609912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.191 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.191 [2024-11-06 14:14:16.619457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.191 [2024-11-06 14:14:16.619488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.191 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.191 [2024-11-06 14:14:16.626870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.191 [2024-11-06 14:14:16.626898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.191 2024/11/06 14:14:16 error on 
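The triplet above is the SPDK target rejecting an nvmf_subsystem_add_ns JSON-RPC call whose NSID is already allocated, and the Go test client logging the resulting Code=-32602 response. For reference, a minimal hand-rolled version of the same request is sketched below in Python. The socket path is SPDK's documented default RPC socket and the payload mirrors the params map printed in the log, but the snippet itself is illustrative and not part of this test run.

# Editor's sketch (not from the log): issue the same nvmf_subsystem_add_ns
# request by hand. Assumes a local SPDK target on its default RPC socket
# /var/tmp/spdk.sock; the payload mirrors the params map[] in the errors above.
import json
import socket

def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
    # One request, one recv: enough for a short reply like this one.
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        return json.loads(s.recv(1 << 20))

resp = spdk_rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
})
# When NSID 1 is already in use, the reply carries the error seen above:
# {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}
print(resp)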
00:09:49.191 15086.00 IOPS, 117.86 MiB/s
00:09:49.191 Latency(us)
00:09:49.191 [2024-11-06T14:14:16.826Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:09:49.191 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:49.191 Nvme1n1            :       5.01 15096.07   117.94    0.00  0.00  8472.85  3654.89  16434.41
00:09:49.191 [2024-11-06T14:14:16.826Z] ===================================================================================================================
00:09:49.191 [2024-11-06T14:14:16.826Z] Total              :            15096.07   117.94    0.00  0.00  8472.85  3654.89  16434.41
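One quick way to read the summary table: the MiB/s column is just average IOPS multiplied by the job's fixed 8192-byte IO size. An illustrative check of the numbers (not part of the log):

# Illustrative check of the summary row: MiB/s = IOPS * IO size / 2**20.
iops = 15096.07          # Nvme1n1 average IOPS over the 5.01 s run
io_size = 8192           # bytes, from the bdevperf job line above
print(f"{iops * io_size / 2**20:.2f} MiB/s")  # 117.94, matching the table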
00:09:49.191 [2024-11-06 14:14:16.687073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:49.191 [2024-11-06 14:14:16.687104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:49.192 2024/11/06 14:14:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line sequence continues, timestamps stepping roughly every 8 ms from 14:14:16.687 through 14:14:17.251, where this excerpt is cut off mid-message ...]
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.712 [2024-11-06 14:14:17.259851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.712 [2024-11-06 14:14:17.259874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.712 2024/11/06 14:14:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.712 [2024-11-06 14:14:17.267859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.712 [2024-11-06 14:14:17.267882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.712 2024/11/06 14:14:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.712 [2024-11-06 14:14:17.275857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.712 [2024-11-06 14:14:17.275881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.712 2024/11/06 14:14:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.712 [2024-11-06 14:14:17.283883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.712 [2024-11-06 14:14:17.283907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.712 2024/11/06 14:14:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.712 [2024-11-06 14:14:17.291874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.712 [2024-11-06 14:14:17.291896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.712 2024/11/06 14:14:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.712 [2024-11-06 14:14:17.299862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.712 [2024-11-06 14:14:17.299885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.712 2024/11/06 14:14:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.712 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69902) - No such process 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69902 00:09:49.712 14:14:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.712 delay0 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.712 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:49.970 [2024-11-06 14:14:17.523297] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:56.524 Initializing NVMe Controllers 00:09:56.524 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:56.524 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:56.524 Initialization complete. Launching workers. 
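The sequence traced above is the zcopy test's recipe for exercising abort handling: remove the namespace, re-create it behind a delay bdev so that I/O stays in flight long enough to cancel, then run the abort example against it. Reduced to plain rpc.py calls it looks roughly like the sketch below; bdev names, the NQN, the latency values (microseconds, per the rpc flags traced above), and the abort binary's flags are all taken verbatim from the trace, but this is a sketch of the pattern, not the test script itself:

    # wrap malloc0 in a delay bdev with 1 s avg/p99 read and write latency (values from the trace)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the delayed bdev as namespace 1 of the existing subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue 64 random R/W I/Os over TCP and abort them mid-flight for 5 seconds
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The abort statistics that follow (submitted/successful/unsuccessful counts) are the example's own summary of how many of those in-flight commands it managed to cancel.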
00:09:56.524 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:09:56.524 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 33 00:09:56.524 success 178, unsuccessful 188, failed 0 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.524 rmmod nvme_tcp 00:09:56.524 rmmod nvme_fabrics 00:09:56.524 rmmod nvme_keyring 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69721 ']' 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69721 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 69721 ']' 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 69721 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69721 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:56.524 killing process with pid 69721 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69721' 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 69721 00:09:56.524 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 69721 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.782 14:14:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:56.782 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:57.041 00:09:57.041 real 0m26.280s 00:09:57.041 user 0m44.328s 00:09:57.041 sys 0m5.156s 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.041 ************************************ 00:09:57.041 END TEST nvmf_zcopy 00:09:57.041 ************************************ 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.041 14:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.041 ************************************ 00:09:57.041 START TEST nvmf_nmic 00:09:57.041 ************************************ 00:09:57.041 14:14:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.299 * Looking for test storage... 00:09:57.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.299 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:57.299 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:57.299 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:57.299 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:57.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.300 --rc genhtml_branch_coverage=1 00:09:57.300 --rc genhtml_function_coverage=1 00:09:57.300 --rc genhtml_legend=1 00:09:57.300 --rc geninfo_all_blocks=1 00:09:57.300 --rc geninfo_unexecuted_blocks=1 00:09:57.300 00:09:57.300 ' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:57.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.300 --rc genhtml_branch_coverage=1 00:09:57.300 --rc genhtml_function_coverage=1 00:09:57.300 --rc genhtml_legend=1 00:09:57.300 --rc geninfo_all_blocks=1 00:09:57.300 --rc geninfo_unexecuted_blocks=1 00:09:57.300 00:09:57.300 ' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:57.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.300 --rc genhtml_branch_coverage=1 00:09:57.300 --rc genhtml_function_coverage=1 00:09:57.300 --rc genhtml_legend=1 00:09:57.300 --rc geninfo_all_blocks=1 00:09:57.300 --rc geninfo_unexecuted_blocks=1 00:09:57.300 00:09:57.300 ' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:57.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.300 --rc genhtml_branch_coverage=1 00:09:57.300 --rc genhtml_function_coverage=1 00:09:57.300 --rc genhtml_legend=1 00:09:57.300 --rc geninfo_all_blocks=1 00:09:57.300 --rc geninfo_unexecuted_blocks=1 00:09:57.300 00:09:57.300 ' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.300 14:14:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:57.300 14:14:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.300 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:57.301 Cannot 
find device "nvmf_init_br" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:57.301 Cannot find device "nvmf_init_br2" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:57.301 Cannot find device "nvmf_tgt_br" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.301 Cannot find device "nvmf_tgt_br2" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:57.301 Cannot find device "nvmf_init_br" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:57.301 Cannot find device "nvmf_init_br2" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:57.301 Cannot find device "nvmf_tgt_br" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:57.301 Cannot find device "nvmf_tgt_br2" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:57.301 Cannot find device "nvmf_br" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:57.301 Cannot find device "nvmf_init_if" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:57.301 Cannot find device "nvmf_init_if2" 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:57.301 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:57.565 14:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:57.565 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:57.565 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:57.565 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:57.565 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:57.565 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:57.565 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:57.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:57.566 00:09:57.566 --- 10.0.0.3 ping statistics --- 00:09:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.566 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:57.566 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:57.566 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:09:57.566 00:09:57.566 --- 10.0.0.4 ping statistics --- 00:09:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.566 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:57.566 00:09:57.566 --- 10.0.0.1 ping statistics --- 00:09:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.566 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:57.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:57.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:09:57.566 00:09:57.566 --- 10.0.0.2 ping statistics --- 00:09:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.566 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=70293 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 70293 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 70293 ']' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.566 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.836 [2024-11-06 14:14:25.194844] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
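Everything from nvmf/common.sh@145 onward in the trace above is nvmf_veth_init building the throwaway test network: initiator veth ends left in the root namespace, target ends moved into nvmf_tgt_ns_spdk, all bridged together, with comment-tagged iptables ACCEPT rules so teardown can strip exactly what setup added. Condensed to the essential commands for one initiator/target pair (names and 10.0.0.0/24 addresses exactly as in the trace; the second pair, link-up steps, and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    # comment-tag the rule so cleanup can do: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The target is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ..., as in the nvmfappstart trace below), which is why it listens on 10.0.0.3 while the initiator connects from 10.0.0.1.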
00:09:57.836 [2024-11-06 14:14:25.194985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.836 [2024-11-06 14:14:25.356035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.836 [2024-11-06 14:14:25.459188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.836 [2024-11-06 14:14:25.459235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.836 [2024-11-06 14:14:25.459247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.836 [2024-11-06 14:14:25.459255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.836 [2024-11-06 14:14:25.459264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.836 [2024-11-06 14:14:25.460708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.836 [2024-11-06 14:14:25.461125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.836 [2024-11-06 14:14:25.461172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.836 [2024-11-06 14:14:25.461201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.402 [2024-11-06 14:14:25.991004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.402 14:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.660 Malloc0 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.660 14:14:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.660 [2024-11-06 14:14:26.093434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.660 test case1: single bdev can't be used in multiple subsystems 00:09:58.660 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.661 [2024-11-06 14:14:26.117612] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:58.661 [2024-11-06 14:14:26.117726] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:58.661 [2024-11-06 14:14:26.117776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.661 2024/11/06 14:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:58.661 request: 00:09:58.661 { 00:09:58.661 "method": "nvmf_subsystem_add_ns", 00:09:58.661 "params": { 00:09:58.661 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:58.661 "namespace": { 00:09:58.661 "bdev_name": "Malloc0", 00:09:58.661 "no_auto_visible": false 00:09:58.661 } 00:09:58.661 } 00:09:58.661 } 00:09:58.661 Got JSON-RPC error response 00:09:58.661 GoRPCClient: error on JSON-RPC call 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:58.661 Adding namespace failed - expected result. 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:58.661 test case2: host connect to nvmf target in multiple paths 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.661 [2024-11-06 14:14:26.125342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.661 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:58.919 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:58.919 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.919 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:58.919 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.919 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:58.919 14:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:01.444 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:01.444 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:01.444 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.444 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:01.444 14:14:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.444 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:01.445 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.445 [global] 00:10:01.445 thread=1 00:10:01.445 invalidate=1 00:10:01.445 rw=write 00:10:01.445 time_based=1 00:10:01.445 runtime=1 00:10:01.445 ioengine=libaio 00:10:01.445 direct=1 00:10:01.445 bs=4096 00:10:01.445 iodepth=1 00:10:01.445 norandommap=0 00:10:01.445 numjobs=1 00:10:01.445 00:10:01.445 verify_dump=1 00:10:01.445 verify_backlog=512 00:10:01.445 verify_state_save=0 00:10:01.445 do_verify=1 00:10:01.445 verify=crc32c-intel 00:10:01.445 [job0] 00:10:01.445 filename=/dev/nvme0n1 00:10:01.445 Could not set queue depth (nvme0n1) 00:10:01.445 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.445 fio-3.35 00:10:01.445 Starting 1 thread 00:10:02.377 00:10:02.377 job0: (groupid=0, jobs=1): err= 0: pid=70402: Wed Nov 6 14:14:29 2024 00:10:02.377 read: IOPS=4534, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1001msec) 00:10:02.377 slat (usec): min=5, max=102, avg=10.45, stdev= 7.11 00:10:02.377 clat (usec): min=45, max=487, avg=107.98, stdev=18.56 00:10:02.377 lat (usec): min=92, max=494, avg=118.43, stdev=21.30 00:10:02.377 clat percentiles (usec): 00:10:02.377 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 99], 00:10:02.377 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 109], 00:10:02.377 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 120], 95.00th=[ 126], 00:10:02.377 | 99.00th=[ 159], 99.50th=[ 241], 99.90th=[ 383], 99.95th=[ 408], 00:10:02.377 | 99.99th=[ 490] 00:10:02.377 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:02.377 slat (usec): min=9, max=110, avg=15.97, stdev= 9.93 00:10:02.377 clat (usec): min=54, max=517, avg=81.71, stdev=18.05 00:10:02.377 lat (usec): min=71, max=529, avg=97.68, stdev=23.06 00:10:02.377 clat percentiles (usec): 00:10:02.378 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:10:02.378 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:10:02.378 | 70.00th=[ 85], 80.00th=[ 88], 90.00th=[ 92], 95.00th=[ 97], 00:10:02.378 | 99.00th=[ 114], 99.50th=[ 161], 99.90th=[ 367], 99.95th=[ 437], 00:10:02.378 | 99.99th=[ 519] 00:10:02.378 bw ( KiB/s): min=20456, max=20456, per=100.00%, avg=20456.00, stdev= 0.00, samples=1 00:10:02.378 iops : min= 5114, max= 5114, avg=5114.00, stdev= 0.00, samples=1 00:10:02.378 lat (usec) : 50=0.01%, 100=61.27%, 250=38.45%, 500=0.26%, 750=0.01% 00:10:02.378 cpu : usr=1.60%, sys=10.40%, ctx=9147, majf=0, minf=5 00:10:02.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.378 issued rwts: total=4539,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.378 00:10:02.378 Run status group 0 (all jobs): 00:10:02.378 READ: bw=17.7MiB/s (18.6MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=17.7MiB (18.6MB), run=1001-1001msec 00:10:02.378 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 
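The [global]/[job0] file printed above is ordinary fio job-file syntax, so the single-pass write-and-verify run can be reproduced outside the fio-wrapper helper once a namespace is connected (a sketch; /dev/nvme0n1 stands in for whatever device node the connect produced):

# one timed write pass with CRC32C verification, mirroring the
# options from the job file shown in the log
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --rw=write --bs=4096 --iodepth=1 \
    --numjobs=1 --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0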
00:10:02.378 00:10:02.378 Disk stats (read/write): 00:10:02.378 nvme0n1: ios=4141/4096, merge=0/0, ticks=464/362, in_queue=826, util=90.98% 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.378 rmmod nvme_tcp 00:10:02.378 rmmod nvme_fabrics 00:10:02.378 rmmod nvme_keyring 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 70293 ']' 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 70293 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 70293 ']' 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 70293 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70293 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:02.378 killing process with pid 70293 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 70293' 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 70293 00:10:02.378 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 70293 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:03.310 00:10:03.310 real 0m6.291s 00:10:03.310 user 0m20.251s 00:10:03.310 sys 0m1.149s 00:10:03.310 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.310 14:14:30 
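The teardown above is nvmftestfini undoing the test setup in reverse: unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, kill the target process, strip the SPDK iptables rules, and dismantle the veth/bridge/netns topology. Condensed into the underlying commands (all taken from the nvmf/common.sh calls visible in this log):

# drop the SPDK firewall rules, then delete the test network topology
iptables-save | grep -v SPDK_NVMF | iptables-restore
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# _remove_spdk_ns then removes the nvmf_tgt_ns_spdk namespace itself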
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.310 ************************************ 00:10:03.310 END TEST nvmf_nmic 00:10:03.310 ************************************ 00:10:03.569 14:14:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:03.569 14:14:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.569 14:14:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.569 14:14:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.569 ************************************ 00:10:03.569 START TEST nvmf_fio_target 00:10:03.569 ************************************ 00:10:03.569 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:03.569 * Looking for test storage... 00:10:03.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.569 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.569 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.570 --rc genhtml_branch_coverage=1 00:10:03.570 --rc genhtml_function_coverage=1 00:10:03.570 --rc genhtml_legend=1 00:10:03.570 --rc geninfo_all_blocks=1 00:10:03.570 --rc geninfo_unexecuted_blocks=1 00:10:03.570 00:10:03.570 ' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.570 --rc genhtml_branch_coverage=1 00:10:03.570 --rc genhtml_function_coverage=1 00:10:03.570 --rc genhtml_legend=1 00:10:03.570 --rc geninfo_all_blocks=1 00:10:03.570 --rc geninfo_unexecuted_blocks=1 00:10:03.570 00:10:03.570 ' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.570 --rc genhtml_branch_coverage=1 00:10:03.570 --rc genhtml_function_coverage=1 00:10:03.570 --rc genhtml_legend=1 00:10:03.570 --rc geninfo_all_blocks=1 00:10:03.570 --rc geninfo_unexecuted_blocks=1 00:10:03.570 00:10:03.570 ' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.570 --rc genhtml_branch_coverage=1 00:10:03.570 --rc genhtml_function_coverage=1 00:10:03.570 --rc genhtml_legend=1 00:10:03.570 --rc geninfo_all_blocks=1 00:10:03.570 --rc geninfo_unexecuted_blocks=1 00:10:03.570 00:10:03.570 ' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:03.570 
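The xtrace above is scripts/common.sh running cmp_versions to decide whether the installed lcov predates 2.x, which selects the LCOV_OPTS/LCOV exports that follow: both version strings are split on dot/dash/colon separators and compared field by field. The same idea in a self-contained sketch (this is not the harness's cmp_versions; in this sketch, missing fields count as 0):

# return 0 if version $1 sorts strictly before version $2
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "lcov older than 2.x"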
14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.570 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.570 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.571 14:14:31 
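The "[: : integer expression expected" message above is a real, if harmless, bash error: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq requires integer operands, so an unset or empty variable trips it. The usual hardening is to default the expansion before comparing (a generic sketch; which variable common.sh is testing is not shown in this log):

# default an empty flag to 0 so the integer comparison cannot fail
some_flag=""
if [ "${some_flag:-0}" -eq 1 ]; then
    echo "flag set"
fi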
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:03.571 Cannot find device "nvmf_init_br" 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:03.571 Cannot find device "nvmf_init_br2" 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:03.571 Cannot find device "nvmf_tgt_br" 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.571 Cannot find device "nvmf_tgt_br2" 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:03.571 Cannot find device "nvmf_init_br" 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:03.571 Cannot find device "nvmf_init_br2" 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:03.571 Cannot find device "nvmf_tgt_br" 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:03.571 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:03.829 Cannot find device "nvmf_tgt_br2" 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:03.829 Cannot find device "nvmf_br" 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:03.829 Cannot find device "nvmf_init_if" 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:03.829 Cannot find device "nvmf_init_if2" 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:03.829 
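The run of "Cannot find device" and "Cannot open network namespace" messages around this point is expected: before building anything, nvmf_veth_init probes for leftovers from a previous run and tolerates each failure (every failing command in the xtrace is followed by '# true'). The idiom, in sketch form:

# best-effort pre-clean: ignore errors for devices that don't exist yet
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true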
14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.829 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:03.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:10:03.830 00:10:03.830 --- 10.0.0.3 ping statistics --- 00:10:03.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.830 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:03.830 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:03.830 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.027 ms 00:10:03.830 00:10:03.830 --- 10.0.0.4 ping statistics --- 00:10:03.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.830 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:03.830 00:10:03.830 --- 10.0.0.1 ping statistics --- 00:10:03.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.830 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:03.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:10:03.830 00:10:03.830 --- 10.0.0.2 ping statistics --- 00:10:03.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.830 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70637 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.830 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70637 00:10:04.087 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 70637 ']' 00:10:04.087 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.087 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.087 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.087 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.087 14:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.087 [2024-11-06 14:14:31.527022] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:04.087 [2024-11-06 14:14:31.527125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.087 [2024-11-06 14:14:31.679041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.345 [2024-11-06 14:14:31.786011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.345 [2024-11-06 14:14:31.786065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.345 [2024-11-06 14:14:31.786076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.345 [2024-11-06 14:14:31.786083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.345 [2024-11-06 14:14:31.786091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.345 [2024-11-06 14:14:31.787498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.345 [2024-11-06 14:14:31.788002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.345 [2024-11-06 14:14:31.788016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.345 [2024-11-06 14:14:31.788043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.909 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.909 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:04.909 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.909 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.909 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.909 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.909 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:05.168 [2024-11-06 14:14:32.550702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.168 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.427 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:05.427 14:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.690 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:05.690 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.690 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:05.690 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.949 14:14:33 
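Around this point the harness provisions the target entirely over JSON-RPC: a TCP transport with an 8192-byte IO unit, a series of 64 MiB/512 B malloc bdevs, then (just below) raid0 and concat volumes built from them, a subsystem, its namespaces, and a 10.0.0.3:4420 listener. The same flow condensed into direct rpc.py calls, using only methods and arguments that appear in this log (the bdev names are whatever bdev_malloc_create prints):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
m2=$($rpc bdev_malloc_create 64 512)    # e.g. Malloc2
m3=$($rpc bdev_malloc_create 64 512)    # e.g. Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m2 $m3"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420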
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:05.949 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:06.206 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.463 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:06.463 14:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.818 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:06.818 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.075 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:07.075 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:07.075 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.332 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:07.332 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.591 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:07.591 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:07.591 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:07.849 [2024-11-06 14:14:35.407417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:07.849 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:08.107 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:08.365 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:08.622 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:08.622 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:08.622 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:08.622 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:08.622 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:08.622 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:10.521 14:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:10.521 14:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:10.521 14:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.521 14:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:10.521 14:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.521 14:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:10.521 14:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:10.521 [global] 00:10:10.521 thread=1 00:10:10.521 invalidate=1 00:10:10.521 rw=write 00:10:10.521 time_based=1 00:10:10.521 runtime=1 00:10:10.521 ioengine=libaio 00:10:10.521 direct=1 00:10:10.521 bs=4096 00:10:10.521 iodepth=1 00:10:10.521 norandommap=0 00:10:10.521 numjobs=1 00:10:10.521 00:10:10.521 verify_dump=1 00:10:10.521 verify_backlog=512 00:10:10.521 verify_state_save=0 00:10:10.521 do_verify=1 00:10:10.521 verify=crc32c-intel 00:10:10.521 [job0] 00:10:10.521 filename=/dev/nvme0n1 00:10:10.521 [job1] 00:10:10.521 filename=/dev/nvme0n2 00:10:10.521 [job2] 00:10:10.521 filename=/dev/nvme0n3 00:10:10.521 [job3] 00:10:10.521 filename=/dev/nvme0n4 00:10:10.521 Could not set queue depth (nvme0n1) 00:10:10.521 Could not set queue depth (nvme0n2) 00:10:10.521 Could not set queue depth (nvme0n3) 00:10:10.521 Could not set queue depth (nvme0n4) 00:10:10.778 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.778 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.778 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.778 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.778 fio-3.35 00:10:10.778 Starting 4 threads 00:10:12.196 00:10:12.196 job0: (groupid=0, jobs=1): err= 0: pid=70928: Wed Nov 6 14:14:39 2024 00:10:12.196 read: IOPS=4339, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1001msec) 00:10:12.196 slat (nsec): min=5340, max=67760, avg=7480.02, stdev=3820.07 00:10:12.196 clat (usec): min=91, max=316, avg=113.02, stdev=12.45 00:10:12.196 lat (usec): min=98, max=329, avg=120.50, stdev=13.78 00:10:12.196 clat percentiles (usec): 00:10:12.196 | 1.00th=[ 98], 5.00th=[ 101], 10.00th=[ 102], 20.00th=[ 105], 00:10:12.196 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 114], 00:10:12.196 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 133], 00:10:12.196 | 99.00th=[ 163], 99.50th=[ 178], 99.90th=[ 233], 99.95th=[ 241], 00:10:12.196 | 99.99th=[ 318] 00:10:12.196 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:12.197 slat (usec): 
min=9, max=126, avg=12.18, stdev= 7.05 00:10:12.197 clat (usec): min=69, max=463, avg=89.34, stdev=14.01 00:10:12.197 lat (usec): min=80, max=473, avg=101.51, stdev=17.26 00:10:12.197 clat percentiles (usec): 00:10:12.197 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 81], 00:10:12.197 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89], 00:10:12.197 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 103], 95.00th=[ 112], 00:10:12.197 | 99.00th=[ 135], 99.50th=[ 149], 99.90th=[ 239], 99.95th=[ 269], 00:10:12.197 | 99.99th=[ 465] 00:10:12.197 bw ( KiB/s): min=18768, max=18768, per=33.67%, avg=18768.00, stdev= 0.00, samples=1 00:10:12.197 iops : min= 4692, max= 4692, avg=4692.00, stdev= 0.00, samples=1 00:10:12.197 lat (usec) : 100=46.62%, 250=53.33%, 500=0.06% 00:10:12.197 cpu : usr=1.90%, sys=6.90%, ctx=8953, majf=0, minf=15 00:10:12.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.197 issued rwts: total=4344,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.197 job1: (groupid=0, jobs=1): err= 0: pid=70929: Wed Nov 6 14:14:39 2024 00:10:12.197 read: IOPS=3849, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1001msec) 00:10:12.197 slat (nsec): min=5850, max=69444, avg=8317.93, stdev=5349.75 00:10:12.197 clat (usec): min=92, max=473, avg=129.51, stdev=19.18 00:10:12.197 lat (usec): min=98, max=479, avg=137.83, stdev=21.30 00:10:12.197 clat percentiles (usec): 00:10:12.197 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 118], 00:10:12.197 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:10:12.197 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 155], 00:10:12.197 | 99.00th=[ 180], 99.50th=[ 217], 99.90th=[ 318], 99.95th=[ 392], 00:10:12.197 | 99.99th=[ 474] 00:10:12.197 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:12.197 slat (usec): min=9, max=117, avg=11.56, stdev= 5.14 00:10:12.197 clat (usec): min=70, max=438, avg=100.88, stdev=17.13 00:10:12.197 lat (usec): min=79, max=452, avg=112.44, stdev=18.92 00:10:12.197 clat percentiles (usec): 00:10:12.197 | 1.00th=[ 76], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 91], 00:10:12.197 | 30.00th=[ 95], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 102], 00:10:12.197 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 123], 00:10:12.197 | 99.00th=[ 145], 99.50th=[ 159], 99.90th=[ 306], 99.95th=[ 347], 00:10:12.197 | 99.99th=[ 441] 00:10:12.198 bw ( KiB/s): min=16384, max=16384, per=29.39%, avg=16384.00, stdev= 0.00, samples=1 00:10:12.198 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:12.198 lat (usec) : 100=27.05%, 250=72.69%, 500=0.26% 00:10:12.198 cpu : usr=2.00%, sys=5.80%, ctx=7950, majf=0, minf=11 00:10:12.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.198 issued rwts: total=3853,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.198 job2: (groupid=0, jobs=1): err= 0: pid=70930: Wed Nov 6 14:14:39 2024 00:10:12.198 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:12.198 slat (nsec): min=5037, 
max=26171, avg=6605.55, stdev=1823.31 00:10:12.198 clat (usec): min=124, max=317, avg=202.74, stdev=21.64 00:10:12.198 lat (usec): min=129, max=323, avg=209.35, stdev=21.73 00:10:12.198 clat percentiles (usec): 00:10:12.198 | 1.00th=[ 135], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:10:12.198 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:10:12.198 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 237], 00:10:12.198 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 310], 00:10:12.198 | 99.99th=[ 318] 00:10:12.198 write: IOPS=2620, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:10:12.198 slat (usec): min=5, max=118, avg=11.93, stdev= 4.35 00:10:12.198 clat (usec): min=79, max=5049, avg=163.07, stdev=118.82 00:10:12.198 lat (usec): min=90, max=5059, avg=175.01, stdev=118.74 00:10:12.198 clat percentiles (usec): 00:10:12.198 | 1.00th=[ 110], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:12.198 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:10:12.198 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 184], 00:10:12.198 | 99.00th=[ 227], 99.50th=[ 245], 99.90th=[ 1532], 99.95th=[ 3195], 00:10:12.198 | 99.99th=[ 5080] 00:10:12.198 bw ( KiB/s): min=12288, max=12288, per=22.04%, avg=12288.00, stdev= 0.00, samples=1 00:10:12.198 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:12.198 lat (usec) : 100=0.39%, 250=97.41%, 500=2.12% 00:10:12.198 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02% 00:10:12.198 cpu : usr=1.10%, sys=3.90%, ctx=5184, majf=0, minf=13 00:10:12.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.199 issued rwts: total=2560,2623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.199 job3: (groupid=0, jobs=1): err= 0: pid=70931: Wed Nov 6 14:14:39 2024 00:10:12.199 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:12.199 slat (nsec): min=5063, max=26546, avg=6566.12, stdev=1710.95 00:10:12.199 clat (usec): min=123, max=324, avg=202.72, stdev=21.78 00:10:12.199 lat (usec): min=128, max=331, avg=209.29, stdev=21.87 00:10:12.199 clat percentiles (usec): 00:10:12.199 | 1.00th=[ 133], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:10:12.199 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:10:12.199 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 233], 00:10:12.199 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 314], 99.95th=[ 322], 00:10:12.199 | 99.99th=[ 326] 00:10:12.199 write: IOPS=2620, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:10:12.199 slat (nsec): min=5712, max=67818, avg=11878.00, stdev=3904.10 00:10:12.199 clat (usec): min=77, max=5093, avg=163.18, stdev=120.46 00:10:12.199 lat (usec): min=88, max=5111, avg=175.06, stdev=120.55 00:10:12.199 clat percentiles (usec): 00:10:12.199 | 1.00th=[ 100], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:12.199 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:10:12.199 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 184], 00:10:12.199 | 99.00th=[ 227], 99.50th=[ 245], 99.90th=[ 1598], 99.95th=[ 3228], 00:10:12.199 | 99.99th=[ 5080] 00:10:12.203 bw ( KiB/s): min=12288, max=12288, per=22.04%, avg=12288.00, stdev= 0.00, samples=1 00:10:12.203 iops : min= 3072, max= 
3072, avg=3072.00, stdev= 0.00, samples=1 00:10:12.203 lat (usec) : 100=0.52%, 250=97.43%, 500=1.95%, 750=0.02% 00:10:12.203 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02% 00:10:12.203 cpu : usr=0.80%, sys=4.20%, ctx=5188, majf=0, minf=7 00:10:12.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.203 issued rwts: total=2560,2623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.203 00:10:12.203 Run status group 0 (all jobs): 00:10:12.203 READ: bw=52.0MiB/s (54.5MB/s), 9.99MiB/s-17.0MiB/s (10.5MB/s-17.8MB/s), io=52.0MiB (54.5MB), run=1001-1001msec 00:10:12.203 WRITE: bw=54.4MiB/s (57.1MB/s), 10.2MiB/s-18.0MiB/s (10.7MB/s-18.9MB/s), io=54.5MiB (57.1MB), run=1001-1001msec 00:10:12.203 00:10:12.203 Disk stats (read/write): 00:10:12.203 nvme0n1: ios=3838/4096, merge=0/0, ticks=440/375, in_queue=815, util=89.48% 00:10:12.203 nvme0n2: ios=3338/3584, merge=0/0, ticks=439/379, in_queue=818, util=89.83% 00:10:12.203 nvme0n3: ios=2055/2560, merge=0/0, ticks=414/412, in_queue=826, util=89.52% 00:10:12.203 nvme0n4: ios=2066/2560, merge=0/0, ticks=452/414, in_queue=866, util=90.00% 00:10:12.203 14:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:12.203 [global] 00:10:12.203 thread=1 00:10:12.203 invalidate=1 00:10:12.203 rw=randwrite 00:10:12.203 time_based=1 00:10:12.203 runtime=1 00:10:12.203 ioengine=libaio 00:10:12.203 direct=1 00:10:12.203 bs=4096 00:10:12.203 iodepth=1 00:10:12.203 norandommap=0 00:10:12.203 numjobs=1 00:10:12.203 00:10:12.203 verify_dump=1 00:10:12.203 verify_backlog=512 00:10:12.203 verify_state_save=0 00:10:12.203 do_verify=1 00:10:12.203 verify=crc32c-intel 00:10:12.203 [job0] 00:10:12.203 filename=/dev/nvme0n1 00:10:12.203 [job1] 00:10:12.203 filename=/dev/nvme0n2 00:10:12.203 [job2] 00:10:12.203 filename=/dev/nvme0n3 00:10:12.203 [job3] 00:10:12.203 filename=/dev/nvme0n4 00:10:12.203 Could not set queue depth (nvme0n1) 00:10:12.203 Could not set queue depth (nvme0n2) 00:10:12.203 Could not set queue depth (nvme0n3) 00:10:12.203 Could not set queue depth (nvme0n4) 00:10:12.203 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.203 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.203 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.204 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.204 fio-3.35 00:10:12.204 Starting 4 threads 00:10:13.138 00:10:13.138 job0: (groupid=0, jobs=1): err= 0: pid=70984: Wed Nov 6 14:14:40 2024 00:10:13.138 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:13.138 slat (nsec): min=5911, max=27329, avg=6488.14, stdev=1156.18 00:10:13.138 clat (usec): min=111, max=328, avg=135.01, stdev=13.89 00:10:13.138 lat (usec): min=118, max=335, avg=141.50, stdev=13.98 00:10:13.138 clat percentiles (usec): 00:10:13.138 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:10:13.138 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:10:13.138 | 70.00th=[ 139], 80.00th=[ 143], 
90.00th=[ 151], 95.00th=[ 159], 00:10:13.138 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 262], 99.95th=[ 322], 00:10:13.138 | 99.99th=[ 330] 00:10:13.138 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1001msec); 0 zone resets 00:10:13.138 slat (usec): min=7, max=144, avg=10.58, stdev= 4.07 00:10:13.138 clat (usec): min=85, max=424, avg=109.25, stdev=17.07 00:10:13.138 lat (usec): min=95, max=434, avg=119.84, stdev=18.08 00:10:13.138 clat percentiles (usec): 00:10:13.138 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 99], 00:10:13.138 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 110], 00:10:13.138 | 70.00th=[ 113], 80.00th=[ 118], 90.00th=[ 125], 95.00th=[ 135], 00:10:13.138 | 99.00th=[ 161], 99.50th=[ 184], 99.90th=[ 310], 99.95th=[ 338], 00:10:13.138 | 99.99th=[ 424] 00:10:13.138 bw ( KiB/s): min=16384, max=16384, per=32.16%, avg=16384.00, stdev= 0.00, samples=1 00:10:13.138 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:13.138 lat (usec) : 100=13.20%, 250=86.60%, 500=0.20% 00:10:13.138 cpu : usr=1.20%, sys=5.30%, ctx=7644, majf=0, minf=21 00:10:13.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.138 issued rwts: total=3584,4060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.138 job1: (groupid=0, jobs=1): err= 0: pid=70985: Wed Nov 6 14:14:40 2024 00:10:13.138 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:13.138 slat (nsec): min=4163, max=22179, avg=5855.53, stdev=1206.23 00:10:13.138 clat (usec): min=105, max=3667, avg=207.52, stdev=103.90 00:10:13.138 lat (usec): min=111, max=3673, avg=213.38, stdev=103.93 00:10:13.138 clat percentiles (usec): 00:10:13.138 | 1.00th=[ 125], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 190], 00:10:13.138 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:10:13.138 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 249], 00:10:13.138 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 1336], 99.95th=[ 3621], 00:10:13.138 | 99.99th=[ 3654] 00:10:13.138 write: IOPS=2588, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:10:13.138 slat (usec): min=5, max=207, avg=10.20, stdev= 5.86 00:10:13.138 clat (usec): min=75, max=398, avg=163.32, stdev=21.31 00:10:13.138 lat (usec): min=85, max=422, avg=173.52, stdev=21.74 00:10:13.138 clat percentiles (usec): 00:10:13.138 | 1.00th=[ 97], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:13.138 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:13.138 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:10:13.138 | 99.00th=[ 237], 99.50th=[ 255], 99.90th=[ 289], 99.95th=[ 347], 00:10:13.138 | 99.99th=[ 400] 00:10:13.138 bw ( KiB/s): min=12288, max=12288, per=24.12%, avg=12288.00, stdev= 0.00, samples=1 00:10:13.138 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:13.138 lat (usec) : 100=0.66%, 250=96.64%, 500=2.58%, 750=0.04%, 1000=0.02% 00:10:13.138 lat (msec) : 2=0.02%, 4=0.04% 00:10:13.138 cpu : usr=1.20%, sys=3.10%, ctx=5159, majf=0, minf=7 00:10:13.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:13.138 issued rwts: total=2560,2591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.138 job2: (groupid=0, jobs=1): err= 0: pid=70987: Wed Nov 6 14:14:40 2024 00:10:13.138 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:13.138 slat (usec): min=4, max=151, avg= 5.97, stdev= 3.15 00:10:13.138 clat (usec): min=30, max=3665, avg=206.43, stdev=100.98 00:10:13.138 lat (usec): min=117, max=3672, avg=212.40, stdev=100.94 00:10:13.138 clat percentiles (usec): 00:10:13.138 | 1.00th=[ 133], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 190], 00:10:13.138 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:10:13.139 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 245], 00:10:13.139 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 930], 99.95th=[ 3621], 00:10:13.139 | 99.99th=[ 3654] 00:10:13.139 write: IOPS=2606, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:10:13.139 slat (nsec): min=5671, max=65723, avg=10315.90, stdev=4336.56 00:10:13.139 clat (usec): min=85, max=395, avg=162.96, stdev=21.02 00:10:13.139 lat (usec): min=95, max=404, avg=173.28, stdev=20.92 00:10:13.139 clat percentiles (usec): 00:10:13.139 | 1.00th=[ 97], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:10:13.139 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:13.139 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 200], 00:10:13.139 | 99.00th=[ 227], 99.50th=[ 239], 99.90th=[ 293], 99.95th=[ 343], 00:10:13.139 | 99.99th=[ 396] 00:10:13.139 bw ( KiB/s): min=12288, max=12288, per=24.12%, avg=12288.00, stdev= 0.00, samples=1 00:10:13.139 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:13.139 lat (usec) : 50=0.02%, 100=0.58%, 250=97.14%, 500=2.17%, 750=0.04% 00:10:13.139 lat (usec) : 1000=0.02% 00:10:13.139 lat (msec) : 4=0.04% 00:10:13.139 cpu : usr=0.70%, sys=3.70%, ctx=5174, majf=0, minf=10 00:10:13.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.139 issued rwts: total=2560,2609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.139 job3: (groupid=0, jobs=1): err= 0: pid=70988: Wed Nov 6 14:14:40 2024 00:10:13.139 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:13.139 slat (nsec): min=7644, max=30516, avg=8678.22, stdev=1504.30 00:10:13.139 clat (usec): min=131, max=1530, avg=157.44, stdev=29.18 00:10:13.139 lat (usec): min=139, max=1539, avg=166.12, stdev=29.26 00:10:13.139 clat percentiles (usec): 00:10:13.139 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:10:13.139 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:10:13.139 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 178], 00:10:13.139 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 363], 99.95th=[ 453], 00:10:13.139 | 99.99th=[ 1532] 00:10:13.139 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1001msec); 0 zone resets 00:10:13.139 slat (nsec): min=11414, max=97578, avg=13633.08, stdev=4361.92 00:10:13.139 clat (usec): min=100, max=823, avg=125.31, stdev=18.61 00:10:13.139 lat (usec): min=113, max=856, avg=138.94, stdev=19.85 00:10:13.139 clat percentiles (usec): 00:10:13.139 | 1.00th=[ 109], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:10:13.139 | 
30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:10:13.139 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:10:13.139 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 388], 99.95th=[ 453], 00:10:13.139 | 99.99th=[ 824] 00:10:13.139 bw ( KiB/s): min=13736, max=14168, per=27.39%, avg=13952.00, stdev=305.47, samples=2 00:10:13.139 iops : min= 3434, max= 3542, avg=3488.00, stdev=76.37, samples=2 00:10:13.139 lat (usec) : 250=99.76%, 500=0.21%, 1000=0.02% 00:10:13.139 lat (msec) : 2=0.02% 00:10:13.139 cpu : usr=1.60%, sys=4.80%, ctx=6561, majf=0, minf=13 00:10:13.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.139 issued rwts: total=3072,3489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.139 00:10:13.139 Run status group 0 (all jobs): 00:10:13.139 READ: bw=46.0MiB/s (48.2MB/s), 9.99MiB/s-14.0MiB/s (10.5MB/s-14.7MB/s), io=46.0MiB (48.2MB), run=1001-1001msec 00:10:13.139 WRITE: bw=49.8MiB/s (52.2MB/s), 10.1MiB/s-15.8MiB/s (10.6MB/s-16.6MB/s), io=49.8MiB (52.2MB), run=1001-1001msec 00:10:13.139 00:10:13.139 Disk stats (read/write): 00:10:13.139 nvme0n1: ios=3194/3584, merge=0/0, ticks=456/405, in_queue=861, util=89.57% 00:10:13.139 nvme0n2: ios=2092/2539, merge=0/0, ticks=429/416, in_queue=845, util=89.62% 00:10:13.139 nvme0n3: ios=2075/2557, merge=0/0, ticks=419/424, in_queue=843, util=89.63% 00:10:13.139 nvme0n4: ios=2717/3072, merge=0/0, ticks=432/402, in_queue=834, util=89.91% 00:10:13.139 14:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:13.139 [global] 00:10:13.139 thread=1 00:10:13.139 invalidate=1 00:10:13.139 rw=write 00:10:13.139 time_based=1 00:10:13.139 runtime=1 00:10:13.139 ioengine=libaio 00:10:13.139 direct=1 00:10:13.139 bs=4096 00:10:13.139 iodepth=128 00:10:13.139 norandommap=0 00:10:13.139 numjobs=1 00:10:13.139 00:10:13.139 verify_dump=1 00:10:13.139 verify_backlog=512 00:10:13.139 verify_state_save=0 00:10:13.139 do_verify=1 00:10:13.139 verify=crc32c-intel 00:10:13.139 [job0] 00:10:13.139 filename=/dev/nvme0n1 00:10:13.139 [job1] 00:10:13.139 filename=/dev/nvme0n2 00:10:13.139 [job2] 00:10:13.139 filename=/dev/nvme0n3 00:10:13.139 [job3] 00:10:13.139 filename=/dev/nvme0n4 00:10:13.139 Could not set queue depth (nvme0n1) 00:10:13.139 Could not set queue depth (nvme0n2) 00:10:13.139 Could not set queue depth (nvme0n3) 00:10:13.139 Could not set queue depth (nvme0n4) 00:10:13.397 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.397 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.397 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.397 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.397 fio-3.35 00:10:13.397 Starting 4 threads 00:10:14.772 00:10:14.772 job0: (groupid=0, jobs=1): err= 0: pid=71041: Wed Nov 6 14:14:42 2024 00:10:14.772 read: IOPS=2538, BW=9.92MiB/s (10.4MB/s)(9.98MiB/1006msec) 00:10:14.772 slat (usec): min=2, max=12600, avg=208.24, stdev=920.44 00:10:14.772 clat (usec): 
min=5256, max=41226, avg=26016.48, stdev=5174.30 00:10:14.772 lat (usec): min=10531, max=41236, avg=26224.72, stdev=5146.90 00:10:14.772 clat percentiles (usec): 00:10:14.772 | 1.00th=[10814], 5.00th=[20055], 10.00th=[21365], 20.00th=[22676], 00:10:14.772 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[25297], 00:10:14.772 | 70.00th=[28967], 80.00th=[31065], 90.00th=[33817], 95.00th=[34866], 00:10:14.772 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:14.772 | 99.99th=[41157] 00:10:14.772 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:10:14.772 slat (usec): min=6, max=20838, avg=178.43, stdev=982.39 00:10:14.772 clat (usec): min=14321, max=46283, avg=23185.04, stdev=5659.98 00:10:14.772 lat (usec): min=17536, max=46297, avg=23363.47, stdev=5615.94 00:10:14.772 clat percentiles (usec): 00:10:14.772 | 1.00th=[15401], 5.00th=[18744], 10.00th=[19268], 20.00th=[19530], 00:10:14.772 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[21627], 00:10:14.772 | 70.00th=[23725], 80.00th=[26084], 90.00th=[30278], 95.00th=[36963], 00:10:14.772 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:10:14.772 | 99.99th=[46400] 00:10:14.772 bw ( KiB/s): min= 9724, max=10736, per=16.83%, avg=10230.00, stdev=715.59, samples=2 00:10:14.772 iops : min= 2431, max= 2684, avg=2557.50, stdev=178.90, samples=2 00:10:14.772 lat (msec) : 10=0.02%, 20=20.16%, 50=79.82% 00:10:14.772 cpu : usr=0.80%, sys=4.98%, ctx=588, majf=0, minf=7 00:10:14.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:14.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.772 issued rwts: total=2554,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.773 job1: (groupid=0, jobs=1): err= 0: pid=71042: Wed Nov 6 14:14:42 2024 00:10:14.773 read: IOPS=1540, BW=6162KiB/s (6310kB/s)(6236KiB/1012msec) 00:10:14.773 slat (usec): min=4, max=10414, avg=190.26, stdev=967.13 00:10:14.773 clat (usec): min=10609, max=48771, avg=21480.50, stdev=6678.63 00:10:14.773 lat (usec): min=12236, max=51447, avg=21670.77, stdev=6770.26 00:10:14.773 clat percentiles (usec): 00:10:14.773 | 1.00th=[13042], 5.00th=[14353], 10.00th=[15664], 20.00th=[16450], 00:10:14.773 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18482], 60.00th=[21890], 00:10:14.773 | 70.00th=[23725], 80.00th=[25822], 90.00th=[31327], 95.00th=[34341], 00:10:14.773 | 99.00th=[45351], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:10:14.773 | 99.99th=[49021] 00:10:14.773 write: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec); 0 zone resets 00:10:14.773 slat (usec): min=7, max=14878, avg=339.01, stdev=1188.88 00:10:14.773 clat (usec): min=16442, max=86166, avg=46039.94, stdev=19131.12 00:10:14.773 lat (usec): min=16479, max=86183, avg=46378.95, stdev=19254.65 00:10:14.773 clat percentiles (usec): 00:10:14.773 | 1.00th=[16581], 5.00th=[16712], 10.00th=[16909], 20.00th=[25297], 00:10:14.773 | 30.00th=[36439], 40.00th=[43254], 50.00th=[45876], 60.00th=[51119], 00:10:14.773 | 70.00th=[57410], 80.00th=[63701], 90.00th=[72877], 95.00th=[76022], 00:10:14.773 | 99.00th=[83362], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:10:14.773 | 99.99th=[86508] 00:10:14.773 bw ( KiB/s): min= 7345, max= 8192, per=12.78%, avg=7768.50, stdev=598.92, samples=2 00:10:14.773 iops : min= 1836, max= 
2048, avg=1942.00, stdev=149.91, samples=2 00:10:14.773 lat (msec) : 20=33.24%, 50=43.30%, 100=23.45% 00:10:14.773 cpu : usr=1.29%, sys=3.56%, ctx=242, majf=0, minf=10 00:10:14.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:10:14.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.773 issued rwts: total=1559,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.773 job2: (groupid=0, jobs=1): err= 0: pid=71043: Wed Nov 6 14:14:42 2024 00:10:14.773 read: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec) 00:10:14.773 slat (usec): min=2, max=1959, avg=60.46, stdev=243.65 00:10:14.773 clat (usec): min=2212, max=9635, avg=7911.66, stdev=671.98 00:10:14.773 lat (usec): min=2219, max=9764, avg=7972.12, stdev=663.56 00:10:14.773 clat percentiles (usec): 00:10:14.773 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:10:14.773 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8029], 00:10:14.773 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8848], 00:10:14.773 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[ 9503], 99.95th=[ 9503], 00:10:14.773 | 99.99th=[ 9634] 00:10:14.773 write: IOPS=8200, BW=32.0MiB/s (33.6MB/s)(32.1MiB/1001msec); 0 zone resets 00:10:14.773 slat (usec): min=6, max=1788, avg=57.46, stdev=244.71 00:10:14.773 clat (usec): min=381, max=9298, avg=7529.37, stdev=642.82 00:10:14.773 lat (usec): min=393, max=9309, avg=7586.83, stdev=609.86 00:10:14.773 clat percentiles (usec): 00:10:14.773 | 1.00th=[ 5997], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7177], 00:10:14.773 | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7701], 00:10:14.773 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8094], 95.00th=[ 8356], 00:10:14.773 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[ 9241], 99.95th=[ 9241], 00:10:14.773 | 99.99th=[ 9241] 00:10:14.773 bw ( KiB/s): min=32768, max=32768, per=53.91%, avg=32768.00, stdev= 0.00, samples=1 00:10:14.773 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:10:14.773 lat (usec) : 500=0.04% 00:10:14.773 lat (msec) : 4=0.22%, 10=99.74% 00:10:14.773 cpu : usr=3.40%, sys=12.30%, ctx=886, majf=0, minf=14 00:10:14.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:14.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.773 issued rwts: total=8192,8209,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.773 job3: (groupid=0, jobs=1): err= 0: pid=71044: Wed Nov 6 14:14:42 2024 00:10:14.773 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:10:14.773 slat (usec): min=2, max=12970, avg=206.49, stdev=1037.05 00:10:14.773 clat (usec): min=10675, max=41455, avg=25980.23, stdev=5295.89 00:10:14.773 lat (usec): min=10683, max=41466, avg=26186.73, stdev=5256.24 00:10:14.773 clat percentiles (usec): 00:10:14.773 | 1.00th=[11207], 5.00th=[20055], 10.00th=[21365], 20.00th=[22676], 00:10:14.773 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[25035], 00:10:14.773 | 70.00th=[28967], 80.00th=[31327], 90.00th=[34341], 95.00th=[35390], 00:10:14.773 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:14.773 | 99.99th=[41681] 00:10:14.773 
write: IOPS=2545, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:10:14.773 slat (usec): min=4, max=20956, avg=180.05, stdev=993.31 00:10:14.773 clat (usec): min=3129, max=46751, avg=23138.60, stdev=5693.56 00:10:14.773 lat (usec): min=10672, max=46765, avg=23318.65, stdev=5637.33 00:10:14.773 clat percentiles (usec): 00:10:14.773 | 1.00th=[15270], 5.00th=[19006], 10.00th=[19268], 20.00th=[19530], 00:10:14.773 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[21627], 00:10:14.773 | 70.00th=[23725], 80.00th=[25822], 90.00th=[30802], 95.00th=[36963], 00:10:14.773 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:10:14.773 | 99.99th=[46924] 00:10:14.773 bw ( KiB/s): min= 9756, max=10725, per=16.85%, avg=10240.50, stdev=685.19, samples=2 00:10:14.773 iops : min= 2439, max= 2681, avg=2560.00, stdev=171.12, samples=2 00:10:14.773 lat (msec) : 4=0.02%, 20=21.46%, 50=78.52% 00:10:14.773 cpu : usr=1.39%, sys=4.58%, ctx=565, majf=0, minf=15 00:10:14.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:14.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.773 issued rwts: total=2560,2561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.773 00:10:14.773 Run status group 0 (all jobs): 00:10:14.773 READ: bw=57.4MiB/s (60.2MB/s), 6162KiB/s-32.0MiB/s (6310kB/s-33.5MB/s), io=58.1MiB (60.9MB), run=1001-1012msec 00:10:14.773 WRITE: bw=59.4MiB/s (62.2MB/s), 8095KiB/s-32.0MiB/s (8289kB/s-33.6MB/s), io=60.1MiB (63.0MB), run=1001-1012msec 00:10:14.773 00:10:14.773 Disk stats (read/write): 00:10:14.773 nvme0n1: ios=2098/2521, merge=0/0, ticks=13248/12786, in_queue=26034, util=90.08% 00:10:14.773 nvme0n2: ios=1585/1759, merge=0/0, ticks=16299/36832, in_queue=53131, util=89.84% 00:10:14.773 nvme0n3: ios=7195/7235, merge=0/0, ticks=13855/12020, in_queue=25875, util=90.06% 00:10:14.773 nvme0n4: ios=2094/2528, merge=0/0, ticks=13110/12922, in_queue=26032, util=91.25% 00:10:14.773 14:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:14.773 [global] 00:10:14.773 thread=1 00:10:14.773 invalidate=1 00:10:14.773 rw=randwrite 00:10:14.773 time_based=1 00:10:14.773 runtime=1 00:10:14.773 ioengine=libaio 00:10:14.773 direct=1 00:10:14.773 bs=4096 00:10:14.773 iodepth=128 00:10:14.773 norandommap=0 00:10:14.773 numjobs=1 00:10:14.773 00:10:14.773 verify_dump=1 00:10:14.773 verify_backlog=512 00:10:14.773 verify_state_save=0 00:10:14.773 do_verify=1 00:10:14.773 verify=crc32c-intel 00:10:14.773 [job0] 00:10:14.773 filename=/dev/nvme0n1 00:10:14.773 [job1] 00:10:14.773 filename=/dev/nvme0n2 00:10:14.773 [job2] 00:10:14.773 filename=/dev/nvme0n3 00:10:14.773 [job3] 00:10:14.773 filename=/dev/nvme0n4 00:10:14.773 Could not set queue depth (nvme0n1) 00:10:14.773 Could not set queue depth (nvme0n2) 00:10:14.773 Could not set queue depth (nvme0n3) 00:10:14.773 Could not set queue depth (nvme0n4) 00:10:14.773 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.773 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.773 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:14.773 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.773 fio-3.35 00:10:14.773 Starting 4 threads 00:10:15.706 00:10:15.706 job0: (groupid=0, jobs=1): err= 0: pid=71105: Wed Nov 6 14:14:43 2024 00:10:15.706 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:10:15.706 slat (usec): min=4, max=2551, avg=73.25, stdev=298.84 00:10:15.706 clat (usec): min=7262, max=12072, avg=9499.81, stdev=668.47 00:10:15.706 lat (usec): min=7592, max=12081, avg=9573.06, stdev=648.71 00:10:15.706 clat percentiles (usec): 00:10:15.706 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8586], 20.00th=[ 8979], 00:10:15.706 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:10:15.706 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:10:15.706 | 99.00th=[11076], 99.50th=[11207], 99.90th=[11994], 99.95th=[11994], 00:10:15.706 | 99.99th=[12125] 00:10:15.706 write: IOPS=6907, BW=27.0MiB/s (28.3MB/s)(27.1MiB/1003msec); 0 zone resets 00:10:15.706 slat (usec): min=5, max=2316, avg=69.66, stdev=282.30 00:10:15.706 clat (usec): min=1838, max=11646, avg=9184.87, stdev=854.14 00:10:15.706 lat (usec): min=2236, max=11657, avg=9254.53, stdev=822.28 00:10:15.706 clat percentiles (usec): 00:10:15.706 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8848], 00:10:15.706 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:10:15.706 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10421], 00:10:15.706 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11600], 99.95th=[11600], 00:10:15.706 | 99.99th=[11600] 00:10:15.706 bw ( KiB/s): min=25736, max=28614, per=37.48%, avg=27175.00, stdev=2035.05, samples=2 00:10:15.706 iops : min= 6434, max= 7153, avg=6793.50, stdev=508.41, samples=2 00:10:15.706 lat (msec) : 2=0.01%, 4=0.23%, 10=85.46%, 20=14.30% 00:10:15.706 cpu : usr=2.89%, sys=11.08%, ctx=806, majf=0, minf=10 00:10:15.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:15.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.707 issued rwts: total=6656,6928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.707 job1: (groupid=0, jobs=1): err= 0: pid=71106: Wed Nov 6 14:14:43 2024 00:10:15.707 read: IOPS=2675, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1005msec) 00:10:15.707 slat (usec): min=5, max=14604, avg=183.01, stdev=1031.38 00:10:15.707 clat (usec): min=2481, max=49811, avg=23003.53, stdev=8374.49 00:10:15.707 lat (usec): min=12953, max=49820, avg=23186.53, stdev=8372.45 00:10:15.707 clat percentiles (usec): 00:10:15.707 | 1.00th=[13304], 5.00th=[16188], 10.00th=[17171], 20.00th=[18220], 00:10:15.707 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19268], 60.00th=[20055], 00:10:15.707 | 70.00th=[21103], 80.00th=[27919], 90.00th=[37487], 95.00th=[42206], 00:10:15.707 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:10:15.707 | 99.99th=[50070] 00:10:15.707 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:15.707 slat (usec): min=7, max=13819, avg=161.02, stdev=914.79 00:10:15.707 clat (usec): min=10468, max=44429, avg=20867.04, stdev=8638.91 00:10:15.707 lat (usec): min=13227, max=44457, avg=21028.06, stdev=8654.20 00:10:15.707 clat percentiles (usec): 00:10:15.707 | 1.00th=[11731], 5.00th=[13435], 
10.00th=[13829], 20.00th=[14615], 00:10:15.707 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16188], 60.00th=[17171], 00:10:15.707 | 70.00th=[24249], 80.00th=[29492], 90.00th=[33424], 95.00th=[41157], 00:10:15.707 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:15.707 | 99.99th=[44303] 00:10:15.707 bw ( KiB/s): min=12288, max=12288, per=16.95%, avg=12288.00, stdev= 0.00, samples=2 00:10:15.707 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:15.707 lat (msec) : 4=0.02%, 20=62.45%, 50=37.53% 00:10:15.707 cpu : usr=1.29%, sys=5.78%, ctx=192, majf=0, minf=15 00:10:15.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:15.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.707 issued rwts: total=2689,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.707 job2: (groupid=0, jobs=1): err= 0: pid=71107: Wed Nov 6 14:14:43 2024 00:10:15.707 read: IOPS=5506, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1002msec) 00:10:15.707 slat (usec): min=4, max=2840, avg=89.85, stdev=395.24 00:10:15.707 clat (usec): min=667, max=14377, avg=11483.66, stdev=1177.01 00:10:15.707 lat (usec): min=1798, max=14391, avg=11573.51, stdev=1146.21 00:10:15.707 clat percentiles (usec): 00:10:15.707 | 1.00th=[ 6194], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10814], 00:10:15.707 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:10:15.707 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12780], 95.00th=[13042], 00:10:15.707 | 99.00th=[13304], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:10:15.707 | 99.99th=[14353] 00:10:15.707 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:15.707 slat (usec): min=8, max=2548, avg=84.93, stdev=348.09 00:10:15.707 clat (usec): min=8309, max=14362, avg=11219.57, stdev=758.03 00:10:15.707 lat (usec): min=8844, max=14375, avg=11304.50, stdev=692.64 00:10:15.707 clat percentiles (usec): 00:10:15.707 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11076], 00:10:15.707 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:10:15.707 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:10:15.707 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:10:15.707 | 99.99th=[14353] 00:10:15.707 bw ( KiB/s): min=21792, max=23264, per=31.07%, avg=22528.00, stdev=1040.86, samples=2 00:10:15.707 iops : min= 5448, max= 5816, avg=5632.00, stdev=260.22, samples=2 00:10:15.707 lat (usec) : 750=0.01% 00:10:15.707 lat (msec) : 2=0.14%, 10=7.15%, 20=92.70% 00:10:15.707 cpu : usr=3.00%, sys=9.69%, ctx=636, majf=0, minf=15 00:10:15.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:15.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.707 issued rwts: total=5518,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.707 job3: (groupid=0, jobs=1): err= 0: pid=71108: Wed Nov 6 14:14:43 2024 00:10:15.707 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:15.707 slat (usec): min=5, max=9658, avg=160.50, stdev=846.74 00:10:15.707 clat (usec): min=5687, max=41380, avg=18773.92, stdev=4162.97 00:10:15.707 lat (usec): 
min=5696, max=41394, avg=18934.42, stdev=4249.26 00:10:15.707 clat percentiles (usec): 00:10:15.707 | 1.00th=[10159], 5.00th=[14222], 10.00th=[15139], 20.00th=[16188], 00:10:15.707 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17957], 60.00th=[18744], 00:10:15.707 | 70.00th=[19530], 80.00th=[20317], 90.00th=[23462], 95.00th=[27132], 00:10:15.707 | 99.00th=[34341], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:10:15.707 | 99.99th=[41157] 00:10:15.707 write: IOPS=2571, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1005msec); 0 zone resets 00:10:15.707 slat (usec): min=7, max=8396, avg=223.24, stdev=787.89 00:10:15.707 clat (usec): min=2512, max=51936, avg=30479.40, stdev=9459.03 00:10:15.707 lat (usec): min=5232, max=51957, avg=30702.63, stdev=9516.83 00:10:15.707 clat percentiles (usec): 00:10:15.707 | 1.00th=[13042], 5.00th=[15270], 10.00th=[15664], 20.00th=[17695], 00:10:15.707 | 30.00th=[27657], 40.00th=[30016], 50.00th=[32113], 60.00th=[33162], 00:10:15.707 | 70.00th=[35390], 80.00th=[38011], 90.00th=[42206], 95.00th=[44827], 00:10:15.707 | 99.00th=[49546], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:10:15.707 | 99.99th=[52167] 00:10:15.707 bw ( KiB/s): min= 8776, max=11704, per=14.12%, avg=10240.00, stdev=2070.41, samples=2 00:10:15.707 iops : min= 2194, max= 2926, avg=2560.00, stdev=517.60, samples=2 00:10:15.707 lat (msec) : 4=0.02%, 10=0.62%, 20=48.43%, 50=50.66%, 100=0.27% 00:10:15.707 cpu : usr=1.29%, sys=5.38%, ctx=335, majf=0, minf=7 00:10:15.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:15.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.707 issued rwts: total=2560,2584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.707 00:10:15.707 Run status group 0 (all jobs): 00:10:15.707 READ: bw=67.7MiB/s (71.0MB/s), 9.95MiB/s-25.9MiB/s (10.4MB/s-27.2MB/s), io=68.1MiB (71.4MB), run=1002-1005msec 00:10:15.707 WRITE: bw=70.8MiB/s (74.2MB/s), 10.0MiB/s-27.0MiB/s (10.5MB/s-28.3MB/s), io=71.2MiB (74.6MB), run=1002-1005msec 00:10:15.707 00:10:15.707 Disk stats (read/write): 00:10:15.707 nvme0n1: ios=5877/6144, merge=0/0, ticks=13329/12686, in_queue=26015, util=89.68% 00:10:15.707 nvme0n2: ios=2577/2560, merge=0/0, ticks=14034/12580, in_queue=26614, util=90.44% 00:10:15.707 nvme0n3: ios=4724/5120, merge=0/0, ticks=13093/13149, in_queue=26242, util=90.15% 00:10:15.707 nvme0n4: ios=2078/2423, merge=0/0, ticks=18951/34779, in_queue=53730, util=90.74% 00:10:15.707 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:15.965 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71125 00:10:15.966 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:15.966 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:15.966 [global] 00:10:15.966 thread=1 00:10:15.966 invalidate=1 00:10:15.966 rw=read 00:10:15.966 time_based=1 00:10:15.966 runtime=10 00:10:15.966 ioengine=libaio 00:10:15.966 direct=1 00:10:15.966 bs=4096 00:10:15.966 iodepth=1 00:10:15.966 norandommap=1 00:10:15.966 numjobs=1 00:10:15.966 00:10:15.966 [job0] 00:10:15.966 filename=/dev/nvme0n1 00:10:15.966 [job1] 00:10:15.966 filename=/dev/nvme0n2 00:10:15.966 [job2] 00:10:15.966 filename=/dev/nvme0n3 
00:10:15.966 [job3] 00:10:15.966 filename=/dev/nvme0n4 00:10:15.966 Could not set queue depth (nvme0n1) 00:10:15.966 Could not set queue depth (nvme0n2) 00:10:15.966 Could not set queue depth (nvme0n3) 00:10:15.966 Could not set queue depth (nvme0n4) 00:10:15.966 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.966 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.966 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.966 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.966 fio-3.35 00:10:15.966 Starting 4 threads 00:10:19.261 14:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:19.261 fio: pid=71169, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:19.261 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46714880, buflen=4096 00:10:19.261 14:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:19.261 fio: pid=71168, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:19.261 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=91869184, buflen=4096 00:10:19.261 14:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.261 14:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:19.519 fio: pid=71166, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:19.519 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=19763200, buflen=4096 00:10:19.519 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.519 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:19.777 fio: pid=71167, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:19.777 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=356352, buflen=4096 00:10:19.777 00:10:19.777 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71166: Wed Nov 6 14:14:47 2024 00:10:19.777 read: IOPS=6327, BW=24.7MiB/s (25.9MB/s)(82.8MiB/3352msec) 00:10:19.777 slat (usec): min=6, max=16821, avg=12.34, stdev=177.96 00:10:19.777 clat (usec): min=59, max=1165, avg=145.00, stdev=21.92 00:10:19.777 lat (usec): min=111, max=17049, avg=157.34, stdev=180.22 00:10:19.777 clat percentiles (usec): 00:10:19.777 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:10:19.777 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:10:19.777 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 169], 00:10:19.777 | 99.00th=[ 192], 99.50th=[ 208], 99.90th=[ 363], 99.95th=[ 461], 00:10:19.777 | 99.99th=[ 1057] 00:10:19.777 bw ( KiB/s): min=24040, max=26280, per=32.52%, avg=25408.17, stdev=919.84, samples=6 00:10:19.777 iops : min= 6010, max= 6570, avg=6352.00, 
stdev=230.00, samples=6 00:10:19.777 lat (usec) : 100=0.01%, 250=99.72%, 500=0.24%, 750=0.01%, 1000=0.01% 00:10:19.777 lat (msec) : 2=0.01% 00:10:19.777 cpu : usr=0.72%, sys=5.04%, ctx=21219, majf=0, minf=1 00:10:19.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.777 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.777 issued rwts: total=21210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.777 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71167: Wed Nov 6 14:14:47 2024 00:10:19.777 read: IOPS=4499, BW=17.6MiB/s (18.4MB/s)(64.3MiB/3661msec) 00:10:19.777 slat (usec): min=3, max=12796, avg=10.92, stdev=197.23 00:10:19.777 clat (usec): min=76, max=3401, avg=210.60, stdev=93.80 00:10:19.777 lat (usec): min=81, max=12958, avg=221.52, stdev=217.88 00:10:19.777 clat percentiles (usec): 00:10:19.777 | 1.00th=[ 83], 5.00th=[ 92], 10.00th=[ 99], 20.00th=[ 112], 00:10:19.777 | 30.00th=[ 176], 40.00th=[ 215], 50.00th=[ 239], 60.00th=[ 247], 00:10:19.777 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:10:19.777 | 99.00th=[ 359], 99.50th=[ 510], 99.90th=[ 971], 99.95th=[ 1401], 00:10:19.777 | 99.99th=[ 3097] 00:10:19.777 bw ( KiB/s): min=14736, max=30451, per=22.32%, avg=17439.43, stdev=5784.03, samples=7 00:10:19.777 iops : min= 3684, max= 7612, avg=4359.71, stdev=1445.73, samples=7 00:10:19.777 lat (usec) : 100=10.77%, 250=53.49%, 500=35.22%, 750=0.33%, 1000=0.10% 00:10:19.777 lat (msec) : 2=0.06%, 4=0.02% 00:10:19.777 cpu : usr=0.55%, sys=2.84%, ctx=16484, majf=0, minf=2 00:10:19.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.777 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.777 issued rwts: total=16472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.777 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71168: Wed Nov 6 14:14:47 2024 00:10:19.777 read: IOPS=7127, BW=27.8MiB/s (29.2MB/s)(87.6MiB/3147msec) 00:10:19.777 slat (usec): min=3, max=15456, avg= 7.99, stdev=134.41 00:10:19.777 clat (usec): min=94, max=4759, avg=131.77, stdev=92.99 00:10:19.777 lat (usec): min=99, max=15591, avg=139.75, stdev=165.22 00:10:19.777 clat percentiles (usec): 00:10:19.777 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 115], 00:10:19.777 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 128], 00:10:19.777 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 155], 95.00th=[ 172], 00:10:19.777 | 99.00th=[ 202], 99.50th=[ 217], 99.90th=[ 1004], 99.95th=[ 2966], 00:10:19.777 | 99.99th=[ 3785] 00:10:19.777 bw ( KiB/s): min=26512, max=30192, per=37.20%, avg=29065.33, stdev=1328.31, samples=6 00:10:19.777 iops : min= 6628, max= 7548, avg=7266.33, stdev=332.08, samples=6 00:10:19.777 lat (usec) : 100=0.19%, 250=99.52%, 500=0.16%, 750=0.02%, 1000=0.01% 00:10:19.777 lat (msec) : 2=0.03%, 4=0.06%, 10=0.01% 00:10:19.777 cpu : usr=0.89%, sys=4.07%, ctx=22440, majf=0, minf=1 00:10:19.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:19.777 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.777 issued rwts: total=22430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.777 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71169: Wed Nov 6 14:14:47 2024 00:10:19.777 read: IOPS=3870, BW=15.1MiB/s (15.9MB/s)(44.6MiB/2947msec) 00:10:19.777 slat (usec): min=4, max=130, avg= 9.72, stdev= 5.95 00:10:19.777 clat (usec): min=117, max=2097, avg=247.62, stdev=58.40 00:10:19.777 lat (usec): min=123, max=2113, avg=257.34, stdev=58.75 00:10:19.777 clat percentiles (usec): 00:10:19.777 | 1.00th=[ 139], 5.00th=[ 182], 10.00th=[ 200], 20.00th=[ 227], 00:10:19.777 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:10:19.777 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:10:19.777 | 99.00th=[ 388], 99.50th=[ 627], 99.90th=[ 898], 99.95th=[ 1090], 00:10:19.777 | 99.99th=[ 1745] 00:10:19.777 bw ( KiB/s): min=14704, max=17224, per=19.75%, avg=15432.00, stdev=1017.89, samples=5 00:10:19.777 iops : min= 3676, max= 4306, avg=3858.00, stdev=254.47, samples=5 00:10:19.777 lat (usec) : 250=56.68%, 500=42.64%, 750=0.51%, 1000=0.10% 00:10:19.777 lat (msec) : 2=0.06%, 4=0.01% 00:10:19.777 cpu : usr=0.58%, sys=3.33%, ctx=11406, majf=0, minf=2 00:10:19.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.777 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.777 issued rwts: total=11406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.777 00:10:19.777 Run status group 0 (all jobs): 00:10:19.777 READ: bw=76.3MiB/s (80.0MB/s), 15.1MiB/s-27.8MiB/s (15.9MB/s-29.2MB/s), io=279MiB (293MB), run=2947-3661msec 00:10:19.777 00:10:19.777 Disk stats (read/write): 00:10:19.777 nvme0n1: ios=19866/0, merge=0/0, ticks=2929/0, in_queue=2929, util=95.62% 00:10:19.777 nvme0n2: ios=16207/0, merge=0/0, ticks=3437/0, in_queue=3437, util=95.43% 00:10:19.777 nvme0n3: ios=22340/0, merge=0/0, ticks=2918/0, in_queue=2918, util=95.48% 00:10:19.777 nvme0n4: ios=11005/0, merge=0/0, ticks=2757/0, in_queue=2757, util=96.74% 00:10:20.035 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.035 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:20.292 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.292 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:20.548 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.548 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:20.806 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.806 14:14:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:21.064 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.064 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:21.322 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:21.322 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 71125 00:10:21.322 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:21.322 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.322 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:21.323 nvmf hotplug test: fio failed as expected 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:21.323 14:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:10:21.579 rmmod nvme_tcp 00:10:21.579 rmmod nvme_fabrics 00:10:21.579 rmmod nvme_keyring 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70637 ']' 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70637 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 70637 ']' 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 70637 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70637 00:10:21.579 killing process with pid 70637 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70637' 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 70637 00:10:21.579 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 70637 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.552 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:22.552 14:14:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:22.553 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:22.553 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:22.553 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:22.553 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:22.553 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:22.553 00:10:22.553 real 0m19.113s 00:10:22.553 user 1m11.984s 00:10:22.553 sys 0m6.488s 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.553 ************************************ 00:10:22.553 END TEST nvmf_fio_target 00:10:22.553 ************************************ 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.553 ************************************ 00:10:22.553 START TEST nvmf_bdevio 00:10:22.553 ************************************ 00:10:22.553 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:22.811 * Looking for test storage... 
00:10:22.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:22.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.811 --rc genhtml_branch_coverage=1 00:10:22.811 --rc genhtml_function_coverage=1 00:10:22.811 --rc genhtml_legend=1 00:10:22.811 --rc geninfo_all_blocks=1 00:10:22.811 --rc geninfo_unexecuted_blocks=1 00:10:22.811 00:10:22.811 ' 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:22.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.811 --rc genhtml_branch_coverage=1 00:10:22.811 --rc genhtml_function_coverage=1 00:10:22.811 --rc genhtml_legend=1 00:10:22.811 --rc geninfo_all_blocks=1 00:10:22.811 --rc geninfo_unexecuted_blocks=1 00:10:22.811 00:10:22.811 ' 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:22.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.811 --rc genhtml_branch_coverage=1 00:10:22.811 --rc genhtml_function_coverage=1 00:10:22.811 --rc genhtml_legend=1 00:10:22.811 --rc geninfo_all_blocks=1 00:10:22.811 --rc geninfo_unexecuted_blocks=1 00:10:22.811 00:10:22.811 ' 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:22.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.811 --rc genhtml_branch_coverage=1 00:10:22.811 --rc genhtml_function_coverage=1 00:10:22.811 --rc genhtml_legend=1 00:10:22.811 --rc geninfo_all_blocks=1 00:10:22.811 --rc geninfo_unexecuted_blocks=1 00:10:22.811 00:10:22.811 ' 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:22.811 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.812 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
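Because NET_TYPE=virt, the nvmftestinit call traced next rebuilds the virtual test network from scratch: two veth pairs whose target ends live in a fresh nvmf_tgt_ns_spdk namespace, two more for the initiator side, all enslaved to one bridge. Condensed from the nvmf_veth_init steps below (nvmf/common.sh@177-214, second interface pair omitted):

    # condensed sketch of nvmf_veth_init as traced below
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" and "Cannot open network namespace" messages below are expected: common.sh first tears down any leftover topology with || true semantics before creating it.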
00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:22.812 Cannot find device "nvmf_init_br" 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:22.812 Cannot find device "nvmf_init_br2" 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:22.812 Cannot find device "nvmf_tgt_br" 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.812 Cannot find device "nvmf_tgt_br2" 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:22.812 Cannot find device "nvmf_init_br" 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:22.812 Cannot find device "nvmf_init_br2" 00:10:22.812 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:22.813 Cannot find device "nvmf_tgt_br" 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:22.813 Cannot find device "nvmf_tgt_br2" 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:22.813 Cannot find device "nvmf_br" 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:22.813 Cannot find device "nvmf_init_if" 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:22.813 Cannot find device "nvmf_init_if2" 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.813 
14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.813 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:23.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:10:23.073 00:10:23.073 --- 10.0.0.3 ping statistics --- 00:10:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.073 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:23.073 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:23.073 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:10:23.073 00:10:23.073 --- 10.0.0.4 ping statistics --- 00:10:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.073 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:23.073 00:10:23.073 --- 10.0.0.1 ping statistics --- 00:10:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.073 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:23.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:23.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:23.073 00:10:23.073 --- 10.0.0.2 ping statistics --- 00:10:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.073 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=71549 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 71549 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 71549 ']' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:23.073 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.073 [2024-11-06 14:14:50.651515] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
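With connectivity verified by the four pings above, nvmfappstart launches nvmf_tgt inside the namespace with -m 0x78, deliberately disjoint from the -c 0x7 mask the bdevio initiator uses later, so target and initiator reactors never contend for a core. Decoding the masks seen in the reactor lines that follow:

    # decode the reactor masks: 0x78 -> cores 3-6 (nvmf_tgt), 0x7 -> cores 0-2 (bdevio)
    for mask in 0x78 0x7; do
        printf '%s -> cores:' "$mask"
        for c in {0..7}; do (( mask >> c & 1 )) && printf ' %d' "$c"; done
        echo
    done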
00:10:23.073 [2024-11-06 14:14:50.651634] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.331 [2024-11-06 14:14:50.813306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.331 [2024-11-06 14:14:50.918659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.331 [2024-11-06 14:14:50.918714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.331 [2024-11-06 14:14:50.918726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.331 [2024-11-06 14:14:50.918733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.331 [2024-11-06 14:14:50.918742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.331 [2024-11-06 14:14:50.920377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:23.331 [2024-11-06 14:14:50.920703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:23.331 [2024-11-06 14:14:50.920838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.331 [2024-11-06 14:14:50.920838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.895 [2024-11-06 14:14:51.495379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.895 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.152 Malloc0 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.152 [2024-11-06 14:14:51.591164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.152 { 00:10:24.152 "params": { 00:10:24.152 "name": "Nvme$subsystem", 00:10:24.152 "trtype": "$TEST_TRANSPORT", 00:10:24.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.152 "adrfam": "ipv4", 00:10:24.152 "trsvcid": "$NVMF_PORT", 00:10:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.152 "hdgst": ${hdgst:-false}, 00:10:24.152 "ddgst": ${ddgst:-false} 00:10:24.152 }, 00:10:24.152 "method": "bdev_nvme_attach_controller" 00:10:24.152 } 00:10:24.152 EOF 00:10:24.152 )") 00:10:24.152 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:24.153 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:24.153 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:24.153 14:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.153 "params": { 00:10:24.153 "name": "Nvme1", 00:10:24.153 "trtype": "tcp", 00:10:24.153 "traddr": "10.0.0.3", 00:10:24.153 "adrfam": "ipv4", 00:10:24.153 "trsvcid": "4420", 00:10:24.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.153 "hdgst": false, 00:10:24.153 "ddgst": false 00:10:24.153 }, 00:10:24.153 "method": "bdev_nvme_attach_controller" 00:10:24.153 }' 00:10:24.153 [2024-11-06 14:14:51.662376] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
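By this point bdevio.sh has assembled the whole target over RPC (script lines @18-@22, traced above); rpc_cmd is a thin wrapper around scripts/rpc.py, so the same setup by hand against the default /var/tmp/spdk.sock would be:

    # equivalent rpc.py calls for the rpc_cmd sequence traced above (bdevio.sh@18-22)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio app itself consumes the gen_nvmf_target_json output above via --json /dev/fd/62 and attaches as controller Nvme1; the Nvme1n1 it reports (131072 blocks of 512 bytes = 64 MiB) is exactly this malloc bdev seen over NVMe/TCP.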
00:10:24.153 [2024-11-06 14:14:51.662493] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71603 ] 00:10:24.410 [2024-11-06 14:14:51.823587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.411 [2024-11-06 14:14:51.931467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.411 [2024-11-06 14:14:51.931649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.411 [2024-11-06 14:14:51.931668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.693 I/O targets: 00:10:24.693 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:24.693 00:10:24.693 00:10:24.693 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.693 http://cunit.sourceforge.net/ 00:10:24.693 00:10:24.693 00:10:24.693 Suite: bdevio tests on: Nvme1n1 00:10:24.693 Test: blockdev write read block ...passed 00:10:24.950 Test: blockdev write zeroes read block ...passed 00:10:24.950 Test: blockdev write zeroes read no split ...passed 00:10:24.950 Test: blockdev write zeroes read split ...passed 00:10:24.950 Test: blockdev write zeroes read split partial ...passed 00:10:24.950 Test: blockdev reset ...[2024-11-06 14:14:52.400686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:24.950 [2024-11-06 14:14:52.400811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:10:24.950 passed 00:10:24.950 Test: blockdev write read 8 blocks ...[2024-11-06 14:14:52.412188] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:24.950 passed 00:10:24.950 Test: blockdev write read size > 128k ...passed 00:10:24.950 Test: blockdev write read invalid size ...passed 00:10:24.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:24.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:24.951 Test: blockdev write read max offset ...passed 00:10:24.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:24.951 Test: blockdev writev readv 8 blocks ...passed 00:10:24.951 Test: blockdev writev readv 30 x 1block ...passed 00:10:24.951 Test: blockdev writev readv block ...passed 00:10:24.951 Test: blockdev writev readv size > 128k ...passed 00:10:25.209 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:25.209 Test: blockdev comparev and writev ...[2024-11-06 14:14:52.586529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.586592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.586613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.586623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.586891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.586904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.586920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.586929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.587196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.587213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.587229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.587238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.587488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.587502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.587521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.209 [2024-11-06 14:14:52.587531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:25.209 passed 00:10:25.209 Test: blockdev nvme passthru rw ...passed 00:10:25.209 Test: blockdev nvme passthru vendor specific ...[2024-11-06 14:14:52.670299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.209 [2024-11-06 14:14:52.670345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:25.209 passed 00:10:25.209 Test: blockdev nvme admin passthru ...[2024-11-06 14:14:52.670590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.209 [2024-11-06 14:14:52.670610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.670757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.209 [2024-11-06 14:14:52.670774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:25.209 [2024-11-06 14:14:52.670868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.209 [2024-11-06 14:14:52.670879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:25.209 passed 00:10:25.209 Test: blockdev copy ...passed 00:10:25.209 00:10:25.210 Run Summary: Type Total Ran Passed Failed Inactive 00:10:25.210 suites 1 1 n/a 0 0 00:10:25.210 tests 23 23 23 0 0 00:10:25.210 asserts 152 152 152 0 n/a 00:10:25.210 00:10:25.210 Elapsed time = 0.989 seconds 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.143 rmmod nvme_tcp 00:10:26.143 rmmod nvme_fabrics 00:10:26.143 rmmod nvme_keyring 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 71549 ']' 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 71549 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 71549 ']' 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 71549 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71549 00:10:26.143 killing process with pid 71549 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71549' 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 71549 00:10:26.143 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 71549 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:27.077 14:14:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:27.077 00:10:27.077 real 0m4.576s 00:10:27.077 user 0m17.590s 00:10:27.077 sys 0m0.825s 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.077 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.077 ************************************ 00:10:27.077 END TEST nvmf_bdevio 00:10:27.077 ************************************ 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:27.335 00:10:27.335 real 3m34.977s 00:10:27.335 user 11m17.839s 00:10:27.335 sys 0m48.489s 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.335 ************************************ 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.335 END TEST nvmf_target_core 00:10:27.335 ************************************ 00:10:27.335 14:14:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:27.335 14:14:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:27.335 14:14:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.335 14:14:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.335 ************************************ 00:10:27.335 START TEST nvmf_target_extra 00:10:27.335 ************************************ 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:27.335 * Looking for test storage... 
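The nvmf_bdevio teardown traced just above is the setup in reverse: module unload, removal of only the rules tagged earlier, then nvmf_veth_fini. Condensed (the final netns removal happens inside _remove_spdk_ns with output suppressed, so that line is sketched rather than taken from the trace):

    # condensed teardown as traced above
    modprobe -r nvme-tcp                                    # also drops nvme_fabrics, nvme_keyring
    modprobe -r nvme-fabrics
    kill "$nvmfpid"                                         # killprocess 71549
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: strip only the tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster && ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                        # assumed _remove_spdk_ns behavior

Tagging every rule with an SPDK_NVMF comment at insert time (the ipts wrapper at nvmf/common.sh@790) is what makes the grep -v filter safe: host firewall rules unrelated to the test survive the restore untouched.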
00:10:27.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.335 --rc genhtml_branch_coverage=1 00:10:27.335 --rc genhtml_function_coverage=1 00:10:27.335 --rc genhtml_legend=1 00:10:27.335 --rc geninfo_all_blocks=1 00:10:27.335 --rc geninfo_unexecuted_blocks=1 00:10:27.335 00:10:27.335 ' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.335 --rc genhtml_branch_coverage=1 00:10:27.335 --rc genhtml_function_coverage=1 00:10:27.335 --rc genhtml_legend=1 00:10:27.335 --rc geninfo_all_blocks=1 00:10:27.335 --rc geninfo_unexecuted_blocks=1 00:10:27.335 00:10:27.335 ' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.335 --rc genhtml_branch_coverage=1 00:10:27.335 --rc genhtml_function_coverage=1 00:10:27.335 --rc genhtml_legend=1 00:10:27.335 --rc geninfo_all_blocks=1 00:10:27.335 --rc geninfo_unexecuted_blocks=1 00:10:27.335 00:10:27.335 ' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.335 --rc genhtml_branch_coverage=1 00:10:27.335 --rc genhtml_function_coverage=1 00:10:27.335 --rc genhtml_legend=1 00:10:27.335 --rc geninfo_all_blocks=1 00:10:27.335 --rc geninfo_unexecuted_blocks=1 00:10:27.335 00:10:27.335 ' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.335 14:14:54 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.335 14:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:27.336 ************************************ 00:10:27.336 START TEST nvmf_example 00:10:27.336 ************************************ 00:10:27.336 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:27.594 * Looking for test storage... 
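The `[: : integer expression expected` message the trace captures at nvmf/common.sh line 33 comes from `'[' '' -eq 1 ']'`: a variable that was never assigned expands to the empty string, and `[` refuses the numeric comparison. A minimal standalone reproduction plus one conventional guard, assuming bash; the variable name below is illustrative, not the one common.sh actually tests:

```bash
#!/usr/bin/env bash
# Reproduce the warning seen in the trace: an empty expansion reaches `-eq`,
# and `[` emits "integer expression expected" (exit status 2, branch not taken).
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then   # drop 2>/dev/null to see the warning
    echo "flag set"
fi

# Guard: default the expansion to 0 so the operand is always an integer
# (":-" covers both unset and set-but-empty).
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```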
00:10:27.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.594 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:27.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.595 --rc genhtml_branch_coverage=1 00:10:27.595 --rc genhtml_function_coverage=1 00:10:27.595 --rc genhtml_legend=1 00:10:27.595 --rc geninfo_all_blocks=1 00:10:27.595 --rc geninfo_unexecuted_blocks=1 00:10:27.595 00:10:27.595 ' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:27.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.595 --rc genhtml_branch_coverage=1 00:10:27.595 --rc genhtml_function_coverage=1 00:10:27.595 --rc genhtml_legend=1 00:10:27.595 --rc geninfo_all_blocks=1 00:10:27.595 --rc geninfo_unexecuted_blocks=1 00:10:27.595 00:10:27.595 ' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:27.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.595 --rc genhtml_branch_coverage=1 00:10:27.595 --rc genhtml_function_coverage=1 00:10:27.595 --rc genhtml_legend=1 00:10:27.595 --rc geninfo_all_blocks=1 00:10:27.595 --rc geninfo_unexecuted_blocks=1 00:10:27.595 00:10:27.595 ' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:27.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.595 --rc genhtml_branch_coverage=1 00:10:27.595 --rc genhtml_function_coverage=1 00:10:27.595 --rc genhtml_legend=1 00:10:27.595 --rc geninfo_all_blocks=1 00:10:27.595 --rc geninfo_unexecuted_blocks=1 00:10:27.595 00:10:27.595 ' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:27.595 14:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.595 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:27.595 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:27.596 14:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:27.596 Cannot find device "nvmf_init_br" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:27.596 Cannot find device "nvmf_init_br2" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:27.596 Cannot find device "nvmf_tgt_br" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.596 Cannot find device "nvmf_tgt_br2" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:27.596 Cannot find device "nvmf_init_br" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:27.596 Cannot find device "nvmf_init_br2" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:27.596 Cannot find device "nvmf_tgt_br" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:27.596 Cannot find device "nvmf_tgt_br2" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:27.596 Cannot find device "nvmf_br" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:27.596 Cannot find 
device "nvmf_init_if" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:27.596 Cannot find device "nvmf_init_if2" 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:27.596 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:27.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:27.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:27.854 00:10:27.854 --- 10.0.0.3 ping statistics --- 00:10:27.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.854 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:27.854 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:27.854 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:10:27.854 00:10:27.854 --- 10.0.0.4 ping statistics --- 00:10:27.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.854 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:27.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:27.854 00:10:27.854 --- 10.0.0.1 ping statistics --- 00:10:27.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.854 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:27.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:10:27.854 00:10:27.854 --- 10.0.0.2 ping statistics --- 00:10:27.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.854 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71918 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71918 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 71918 ']' 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.854 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.788 14:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:10:28.788 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:41.044 Initializing NVMe Controllers 00:10:41.044 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.044 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.044 Initialization complete. Launching workers. 00:10:41.044 ======================================================== 00:10:41.044 Latency(us) 00:10:41.044 Device Information : IOPS MiB/s Average min max 00:10:41.044 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16724.01 65.33 3826.34 747.69 21129.37 00:10:41.044 ======================================================== 00:10:41.044 Total : 16724.01 65.33 3826.34 747.69 21129.37 00:10:41.044 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.044 rmmod nvme_tcp 00:10:41.044 rmmod nvme_fabrics 00:10:41.044 rmmod nvme_keyring 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71918 ']' 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71918 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 71918 ']' 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 71918 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71918 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:41.044 14:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:41.044 killing process with pid 71918 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71918' 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 71918 00:10:41.044 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 71918 00:10:41.044 nvmf threads initialize successfully 00:10:41.044 bdev subsystem init successfully 00:10:41.044 created a nvmf target service 00:10:41.044 create targets's poll groups done 00:10:41.044 all subsystems of target started 00:10:41.044 nvmf target is running 00:10:41.044 all subsystems of target stopped 00:10:41.044 destroy targets's poll groups done 00:10:41.044 destroyed the nvmf target service 00:10:41.044 bdev subsystem finish successfully 00:10:41.044 nvmf threads destroy successfully 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.044 ************************************ 00:10:41.044 END TEST nvmf_example 00:10:41.044 ************************************ 00:10:41.044 00:10:41.044 real 0m12.824s 00:10:41.044 user 0m45.641s 00:10:41.044 sys 0m1.657s 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.044 ************************************ 00:10:41.044 START TEST nvmf_filesystem 00:10:41.044 ************************************ 00:10:41.044 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.044 * Looking for test storage... 
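Before the filesystem test begins, note the network lifecycle the example test above just exercised: nvmf_veth_init (common.sh@145–224 in the trace) builds a namespaced veth/bridge topology, and nvmf_veth_fini (@233–246) tears it down. Below is a condensed, standalone sketch of that setup with names and addresses taken from the trace; it assumes root and iproute2, and collapses the four veth pairs the harness creates down to the two that the 10.0.0.1/10.0.0.3 ping checks actually use:

```bash
#!/usr/bin/env bash
# Condensed sketch of the topology nvmf_veth_init builds in the trace above.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# One veth pair per side: the *_if end carries the IP address, the *_br end
# plugs into the bridge; the target's *_if end is moved into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing as in the log: initiator 10.0.0.1, target 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring everything up and bridge the two *_br peers together.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# Admit NVMe/TCP traffic to the target port, tagged so teardown can strip it
# (the comment text here is illustrative; the harness embeds the full rule).
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: init_if -> 4420'

ping -c 1 10.0.0.3   # same reachability check the harness performs
```

The fini path in the trace simply reverses this: detach the ports (`nomaster`), bring the links down and delete them along with the bridge, strip the tagged rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`, and remove the namespace.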
00:10:41.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.045 --rc genhtml_branch_coverage=1 00:10:41.045 --rc genhtml_function_coverage=1 00:10:41.045 --rc genhtml_legend=1 00:10:41.045 --rc geninfo_all_blocks=1 00:10:41.045 --rc geninfo_unexecuted_blocks=1 00:10:41.045 00:10:41.045 ' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.045 --rc genhtml_branch_coverage=1 00:10:41.045 --rc genhtml_function_coverage=1 00:10:41.045 --rc genhtml_legend=1 00:10:41.045 --rc geninfo_all_blocks=1 00:10:41.045 --rc geninfo_unexecuted_blocks=1 00:10:41.045 00:10:41.045 ' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.045 --rc genhtml_branch_coverage=1 00:10:41.045 --rc genhtml_function_coverage=1 00:10:41.045 --rc genhtml_legend=1 00:10:41.045 --rc geninfo_all_blocks=1 00:10:41.045 --rc geninfo_unexecuted_blocks=1 00:10:41.045 00:10:41.045 ' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.045 --rc genhtml_branch_coverage=1 00:10:41.045 --rc genhtml_function_coverage=1 00:10:41.045 --rc genhtml_legend=1 00:10:41.045 --rc geninfo_all_blocks=1 00:10:41.045 --rc geninfo_unexecuted_blocks=1 00:10:41.045 00:10:41.045 ' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:41.045 14:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:41.045 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:41.046 #define SPDK_CONFIG_H 00:10:41.046 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:41.046 #define SPDK_CONFIG_APPS 1 00:10:41.046 #define SPDK_CONFIG_ARCH 
native 00:10:41.046 #define SPDK_CONFIG_ASAN 1 00:10:41.046 #define SPDK_CONFIG_AVAHI 1 00:10:41.046 #undef SPDK_CONFIG_CET 00:10:41.046 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:41.046 #define SPDK_CONFIG_COVERAGE 1 00:10:41.046 #define SPDK_CONFIG_CROSS_PREFIX 00:10:41.046 #undef SPDK_CONFIG_CRYPTO 00:10:41.046 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:41.046 #undef SPDK_CONFIG_CUSTOMOCF 00:10:41.046 #undef SPDK_CONFIG_DAOS 00:10:41.046 #define SPDK_CONFIG_DAOS_DIR 00:10:41.046 #define SPDK_CONFIG_DEBUG 1 00:10:41.046 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:41.046 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:41.046 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:41.046 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:41.046 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:41.046 #undef SPDK_CONFIG_DPDK_UADK 00:10:41.046 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:41.046 #define SPDK_CONFIG_EXAMPLES 1 00:10:41.046 #undef SPDK_CONFIG_FC 00:10:41.046 #define SPDK_CONFIG_FC_PATH 00:10:41.046 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:41.046 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:41.046 #define SPDK_CONFIG_FSDEV 1 00:10:41.046 #undef SPDK_CONFIG_FUSE 00:10:41.046 #undef SPDK_CONFIG_FUZZER 00:10:41.046 #define SPDK_CONFIG_FUZZER_LIB 00:10:41.046 #define SPDK_CONFIG_GOLANG 1 00:10:41.046 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:41.046 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:41.046 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:41.046 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:41.046 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:41.046 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:41.046 #undef SPDK_CONFIG_HAVE_LZ4 00:10:41.046 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:41.046 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:41.046 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:41.046 #define SPDK_CONFIG_IDXD 1 00:10:41.046 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:41.046 #undef SPDK_CONFIG_IPSEC_MB 00:10:41.046 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:41.046 #define SPDK_CONFIG_ISAL 1 00:10:41.046 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:41.046 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:41.046 #define SPDK_CONFIG_LIBDIR 00:10:41.046 #undef SPDK_CONFIG_LTO 00:10:41.046 #define SPDK_CONFIG_MAX_LCORES 128 00:10:41.046 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:41.046 #define SPDK_CONFIG_NVME_CUSE 1 00:10:41.046 #undef SPDK_CONFIG_OCF 00:10:41.046 #define SPDK_CONFIG_OCF_PATH 00:10:41.046 #define SPDK_CONFIG_OPENSSL_PATH 00:10:41.046 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:41.046 #define SPDK_CONFIG_PGO_DIR 00:10:41.046 #undef SPDK_CONFIG_PGO_USE 00:10:41.046 #define SPDK_CONFIG_PREFIX /usr/local 00:10:41.046 #undef SPDK_CONFIG_RAID5F 00:10:41.046 #undef SPDK_CONFIG_RBD 00:10:41.046 #define SPDK_CONFIG_RDMA 1 00:10:41.046 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:41.046 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:41.046 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:41.046 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:41.046 #define SPDK_CONFIG_SHARED 1 00:10:41.046 #undef SPDK_CONFIG_SMA 00:10:41.046 #define SPDK_CONFIG_TESTS 1 00:10:41.046 #undef SPDK_CONFIG_TSAN 00:10:41.046 #define SPDK_CONFIG_UBLK 1 00:10:41.046 #define SPDK_CONFIG_UBSAN 1 00:10:41.046 #undef SPDK_CONFIG_UNIT_TESTS 00:10:41.046 #undef SPDK_CONFIG_URING 00:10:41.046 #define SPDK_CONFIG_URING_PATH 00:10:41.046 #undef SPDK_CONFIG_URING_ZNS 00:10:41.046 #define SPDK_CONFIG_USDT 1 00:10:41.046 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:41.046 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 
00:10:41.046 #define SPDK_CONFIG_VFIO_USER 1 00:10:41.046 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:41.046 #define SPDK_CONFIG_VHOST 1 00:10:41.046 #define SPDK_CONFIG_VIRTIO 1 00:10:41.046 #undef SPDK_CONFIG_VTUNE 00:10:41.046 #define SPDK_CONFIG_VTUNE_DIR 00:10:41.046 #define SPDK_CONFIG_WERROR 1 00:10:41.046 #define SPDK_CONFIG_WPDK_DIR 00:10:41.046 #undef SPDK_CONFIG_XNVME 00:10:41.046 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.046 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:41.047 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:41.048 
14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:41.048 14:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:10:41.048 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:41.048 14:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.048 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.049 
14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j10 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 
00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 72198 ]] 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 72198 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.SLg7S6 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.SLg7S6/tests/target /tmp/spdk.SLg7S6 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13863698432 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5706317824 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=devtmpfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4194304 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4194304 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6254026752 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6265393152 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=2486018048 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=2506158080 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=20140032 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13863698432 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5706317824 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.049 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6265253888 
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6265393152 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=139264 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda2 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext4 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=840085504 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1012768768 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=103477248 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda3 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=vfat 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91617280 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=104607744 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12990464 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=1253064704 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1253076992 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=fuse.sshfs 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91008872448 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=105088212992 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8693907456 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:41.050 * Looking for test storage... 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/home 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=13863698432 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == tmpfs ]] 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == ramfs ]] 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ /home == / ]] 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:41.050 14:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.050 --rc genhtml_branch_coverage=1
00:10:41.050 --rc genhtml_function_coverage=1
00:10:41.050 --rc genhtml_legend=1
00:10:41.050 --rc geninfo_all_blocks=1
00:10:41.050 --rc geninfo_unexecuted_blocks=1
00:10:41.050
00:10:41.050 '
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.050 --rc genhtml_branch_coverage=1
00:10:41.050 --rc genhtml_function_coverage=1
00:10:41.050 --rc genhtml_legend=1
00:10:41.050 --rc geninfo_all_blocks=1
00:10:41.050 --rc geninfo_unexecuted_blocks=1
00:10:41.050
00:10:41.050 '
00:10:41.050 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.051 --rc genhtml_branch_coverage=1
00:10:41.051 --rc genhtml_function_coverage=1
00:10:41.051 --rc genhtml_legend=1
00:10:41.051 --rc geninfo_all_blocks=1
00:10:41.051 --rc geninfo_unexecuted_blocks=1
00:10:41.051
00:10:41.051 '
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.051 --rc genhtml_branch_coverage=1
00:10:41.051 --rc genhtml_function_coverage=1
00:10:41.051 --rc genhtml_legend=1
00:10:41.051 --rc geninfo_all_blocks=1
00:10:41.051 --rc geninfo_unexecuted_blocks=1
00:10:41.051
00:10:41.051 '
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
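The lt/cmp_versions trace above is a pure-bash dotted-version comparison used to decide which lcov options to pass. A condensed sketch of the same idea (our own re-implementation, not the exact scripts/common.sh source):

    # Return success when version $1 is strictly older than $2.
    version_lt() {
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<<"$1"                        # split on . - :
      read -ra ver2 <<<"$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"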
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:41.051 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
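Note the script error captured just above: '[' '' -eq 1 ']' fails with "[: : integer expression expected" because -eq requires integer operands and the variable under test expands to an empty string. The run is unaffected (the test simply evaluates false and the branch falls through), but a guarded form avoids the noise. A sketch, using a hypothetical stand-in name for whatever variable common.sh line 33 actually tests:

    # Numeric test that tolerates an unset or empty variable.
    if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then   # SOME_NVMF_FLAG is illustrative only
      echo "flag enabled"
    fi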
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:10:41.051 Cannot find device "nvmf_init_br"
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:10:41.051 Cannot find device "nvmf_init_br2"
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:10:41.051 Cannot find device "nvmf_tgt_br"
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:10:41.051 Cannot find device "nvmf_tgt_br2"
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true
00:10:41.051 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:10:41.051 Cannot find device "nvmf_init_br"
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:10:41.052 Cannot find device "nvmf_init_br2"
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:10:41.052 Cannot find device "nvmf_tgt_br"
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:10:41.052 Cannot find device "nvmf_tgt_br2"
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:10:41.052 Cannot find device "nvmf_br"
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:10:41.052 Cannot find device "nvmf_init_if"
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:10:41.052 Cannot find device "nvmf_init_if2"
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true
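The teardown traced above follows a deliberate pattern: every ip command that can fail on a clean host is immediately followed by a bare true, so the errors are cosmetic and cleanup always runs to completion. A condensed sketch of the same idempotent cleanup, using || true for the same effect (interface names as in the trace; the final netns delete is our addition for completeness):

    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" nomaster || true   # detach from any stale bridge
      ip link set "$br" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true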
"nvmf_tgt_ns_spdk": No such file or directory 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:41.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:41.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms
00:10:41.052
00:10:41.052 --- 10.0.0.3 ping statistics ---
00:10:41.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:41.052 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:41.052 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:41.052 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms
00:10:41.052
00:10:41.052 --- 10.0.0.4 ping statistics ---
00:10:41.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:41.052 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:41.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:41.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:10:41.052
00:10:41.052 --- 10.0.0.1 ping statistics ---
00:10:41.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:41.052 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
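The ipts helper seen expanding above is easy to reconstruct from its expansion: it runs iptables with the given arguments and tags the rule with an "SPDK_NVMF:" comment embedding those same arguments, so teardown can later find and delete exactly the rules the harness added. A sketch consistent with the trace (not necessarily the exact common.sh definition):

    # Insert an iptables rule tagged for later cleanup.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT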
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:41.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:41.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms
00:10:41.052
00:10:41.052 --- 10.0.0.2 ping statistics ---
00:10:41.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:41.052 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:41.052 ************************************
00:10:41.052 START TEST nvmf_filesystem_no_in_capsule
00:10:41.052 ************************************
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72383
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72383
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 72383 ']'
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:41.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:41.052 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:41.053 [2024-11-06 14:15:08.579393] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:10:41.053 [2024-11-06 14:15:08.579504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:41.313 [2024-11-06 14:15:08.742974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:41.313 [2024-11-06 14:15:08.847914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:41.313 [2024-11-06 14:15:08.847992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:41.313 [2024-11-06 14:15:08.848005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:41.313 [2024-11-06 14:15:08.848013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:41.313 [2024-11-06 14:15:08.848022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
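The launch traced above runs the target inside the test namespace with all tracepoints enabled (-e 0xFFFF) on a four-core mask (-m 0xF), then waitforlisten polls until the RPC socket answers. A condensed sketch of that start-and-wait pattern; the polling loop is our own stand-in for the harness's waitforlisten, not its exact code:

    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do          # max_retries=100, as in the trace
      [[ -S /var/tmp/spdk.sock ]] && break   # RPC UNIX socket exists once the app is up
      sleep 0.1
    done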
00:10:41.313 [2024-11-06 14:15:08.849383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:41.313 [2024-11-06 14:15:08.849487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:41.313 [2024-11-06 14:15:08.849733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:41.313 [2024-11-06 14:15:08.849784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:41.878 [2024-11-06 14:15:09.457304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.878 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:42.448 Malloc1
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
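The provisioning sequence traced here and continued just below maps one-to-one onto SPDK's JSON-RPC interface; rpc_cmd is the harness's wrapper around scripts/rpc.py. The same setup issued directly, as a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0       # TCP transport, zero in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1              # 512 MiB ramdisk with 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420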
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:42.448 [2024-11-06 14:15:09.853332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[
00:10:42.448 {
00:10:42.448 "aliases": [
00:10:42.448 "7c348341-c023-4b29-ab1c-979fdb2906f3"
00:10:42.448 ],
00:10:42.448 "assigned_rate_limits": {
00:10:42.448 "r_mbytes_per_sec": 0,
00:10:42.448 "rw_ios_per_sec": 0,
00:10:42.448 "rw_mbytes_per_sec": 0,
00:10:42.448 "w_mbytes_per_sec": 0
00:10:42.448 },
00:10:42.448 "block_size": 512,
00:10:42.448 "claim_type": "exclusive_write",
00:10:42.448 "claimed": true,
00:10:42.448 "driver_specific": {},
00:10:42.448 "memory_domains": [
00:10:42.448 {
00:10:42.448 "dma_device_id": "system",
00:10:42.448 "dma_device_type": 1
00:10:42.448 },
00:10:42.448 {
00:10:42.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
"dma_device_type": 2 00:10:42.448 } 00:10:42.448 ], 00:10:42.448 "name": "Malloc1", 00:10:42.448 "num_blocks": 1048576, 00:10:42.448 "product_name": "Malloc disk", 00:10:42.448 "supported_io_types": { 00:10:42.448 "abort": true, 00:10:42.448 "compare": false, 00:10:42.448 "compare_and_write": false, 00:10:42.448 "copy": true, 00:10:42.448 "flush": true, 00:10:42.448 "get_zone_info": false, 00:10:42.448 "nvme_admin": false, 00:10:42.448 "nvme_io": false, 00:10:42.448 "nvme_io_md": false, 00:10:42.448 "nvme_iov_md": false, 00:10:42.448 "read": true, 00:10:42.448 "reset": true, 00:10:42.448 "seek_data": false, 00:10:42.448 "seek_hole": false, 00:10:42.448 "unmap": true, 00:10:42.448 "write": true, 00:10:42.448 "write_zeroes": true, 00:10:42.448 "zcopy": true, 00:10:42.448 "zone_append": false, 00:10:42.448 "zone_management": false 00:10:42.448 }, 00:10:42.448 "uuid": "7c348341-c023-4b29-ab1c-979fdb2906f3", 00:10:42.448 "zoned": false 00:10:42.448 } 00:10:42.448 ]' 00:10:42.448 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:42.449 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:42.449 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:42.449 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:42.449 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:42.449 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:42.449 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:42.449 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:42.707 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.707 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:42.707 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.707 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:42.707 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:10:44.606 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:45.977 ************************************
00:10:45.977 START TEST filesystem_ext4
00:10:45.977 ************************************
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1
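The per-filesystem smoke test that starts here (and repeats for btrfs and xfs) is a short create/format/mount/IO/unmount cycle over the fabric-attached namespace. A condensed sketch of the steps traced above and below, shown for ext4:

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition spanning the device
    partprobe && sleep 1                                          # let the kernel see nvme0n1p1
    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync                                 # prove writes reach the target
    rm /mnt/device/aaa && sync
    umount /mnt/device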
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']'
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F
00:10:45.977 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:10:45.977 mke2fs 1.47.0 (5-Feb-2023)
00:10:45.977 Discarding device blocks: 0/522240 done
00:10:45.977 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:45.977 Filesystem UUID: 6c0d8b69-4d0e-45e1-b4f4-82671a8e2971
00:10:45.977 Superblock backups stored on blocks:
00:10:45.977 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:45.977
00:10:45.977 Allocating group tables: 0/64 done
00:10:45.977 Writing inode tables: 0/64 done
00:10:45.977 Creating journal (8192 blocks): done
00:10:45.977 Writing superblocks and filesystem accounting information: 0/64 done
00:10:45.977
00:10:45.978 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0
00:10:45.978 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 72383
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:51.253 ************************************
00:10:51.253 END TEST filesystem_ext4
00:10:51.253 ************************************
00:10:51.253
00:10:51.253 real 0m5.576s
00:10:51.253 user 0m0.018s
00:10:51.253 sys 0m0.061s
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:51.253 ************************************
00:10:51.253 START TEST filesystem_btrfs
00:10:51.253 ************************************
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']'
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f
00:10:51.253 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:51.512 btrfs-progs v6.8.1
00:10:51.512 See https://btrfs.readthedocs.io for more information.
00:10:51.512
00:10:51.512 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:51.512 NOTE: several default settings have changed in version 5.15, please make sure
00:10:51.512 this does not affect your deployments:
00:10:51.512 - DUP for metadata (-m dup)
00:10:51.512 - enabled no-holes (-O no-holes)
00:10:51.512 - enabled free-space-tree (-R free-space-tree)
00:10:51.512
00:10:51.512 Label: (null)
00:10:51.512 UUID: a9054c5a-7327-4fab-959d-f7e14c2ef4b6
00:10:51.512 Node size: 16384
00:10:51.512 Sector size: 4096 (CPU page size: 4096)
00:10:51.512 Filesystem size: 510.00MiB
00:10:51.512 Block group profiles:
00:10:51.512 Data: single 8.00MiB
00:10:51.512 Metadata: DUP 32.00MiB
00:10:51.512 System: DUP 8.00MiB
00:10:51.512 SSD detected: yes
00:10:51.512 Zoned device: no
00:10:51.512 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:51.512 Checksum: crc32c
00:10:51.512 Number of devices: 1
00:10:51.512 Devices:
00:10:51.512 ID SIZE PATH
00:10:51.512 1 510.00MiB /dev/nvme0n1p1
00:10:51.512
00:10:51.512 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0
00:10:51.512 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:51.512 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:51.512 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 72383
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:51.512 ************************************
00:10:51.512 END TEST filesystem_btrfs
00:10:51.512 ************************************
00:10:51.512
00:10:51.512 real 0m0.179s
00:10:51.512 user 0m0.011s
00:10:51.512 sys 0m0.052s
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:51.512 ************************************
00:10:51.512 START TEST filesystem_xfs
00:10:51.512 ************************************
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']'
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f
00:10:51.512 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:51.513 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:51.513 = sectsz=512 attr=2, projid32bit=1
00:10:51.513 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:51.513 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
bsize=4096 blocks=130560, imaxpct=25 00:10:51.513 = sunit=0 swidth=0 blks 00:10:51.513 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:51.513 log =internal log bsize=4096 blocks=16384, version=2 00:10:51.513 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:51.513 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:52.447 Discarding blocks...Done. 00:10:52.447 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:52.447 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 72383 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:54.990 ************************************ 00:10:54.990 END TEST filesystem_xfs 00:10:54.990 ************************************ 00:10:54.990 00:10:54.990 real 0m3.042s 00:10:54.990 user 0m0.012s 00:10:54.990 sys 0m0.055s 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.990 14:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 72383 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 72383 ']' 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 72383 00:10:54.990 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:54.991 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:54.991 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72383 00:10:54.991 killing process with pid 72383 00:10:54.991 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:54.991 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:54.991 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72383' 00:10:54.991 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 72383 00:10:54.991 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 72383 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:56.362 00:10:56.362 real 0m15.409s 00:10:56.362 user 0m58.628s 00:10:56.362 sys 0m1.671s 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.362 ************************************ 00:10:56.362 END TEST nvmf_filesystem_no_in_capsule 00:10:56.362 ************************************ 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.362 ************************************ 00:10:56.362 START TEST nvmf_filesystem_in_capsule 00:10:56.362 ************************************ 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72766 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72766 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 72766 ']' 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.362 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.619 [2024-11-06 14:15:24.021438] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:56.619 [2024-11-06 14:15:24.021560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.619 [2024-11-06 14:15:24.184248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.876 [2024-11-06 14:15:24.288929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.876 [2024-11-06 14:15:24.288995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.876 [2024-11-06 14:15:24.289008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.876 [2024-11-06 14:15:24.289015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.876 [2024-11-06 14:15:24.289024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.876 [2024-11-06 14:15:24.290411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.876 [2024-11-06 14:15:24.290522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.876 [2024-11-06 14:15:24.290878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.876 [2024-11-06 14:15:24.290882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.446 14:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.446 [2024-11-06 14:15:24.947913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.446 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.705 Malloc1 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.705 [2024-11-06 14:15:25.334586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:57.705 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1382 -- # local bs 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:57.962 { 00:10:57.962 "aliases": [ 00:10:57.962 "4a8cfad7-6160-499a-9d76-4a2f2594aa1f" 00:10:57.962 ], 00:10:57.962 "assigned_rate_limits": { 00:10:57.962 "r_mbytes_per_sec": 0, 00:10:57.962 "rw_ios_per_sec": 0, 00:10:57.962 "rw_mbytes_per_sec": 0, 00:10:57.962 "w_mbytes_per_sec": 0 00:10:57.962 }, 00:10:57.962 "block_size": 512, 00:10:57.962 "claim_type": "exclusive_write", 00:10:57.962 "claimed": true, 00:10:57.962 "driver_specific": {}, 00:10:57.962 "memory_domains": [ 00:10:57.962 { 00:10:57.962 "dma_device_id": "system", 00:10:57.962 "dma_device_type": 1 00:10:57.962 }, 00:10:57.962 { 00:10:57.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.962 "dma_device_type": 2 00:10:57.962 } 00:10:57.962 ], 00:10:57.962 "name": "Malloc1", 00:10:57.962 "num_blocks": 1048576, 00:10:57.962 "product_name": "Malloc disk", 00:10:57.962 "supported_io_types": { 00:10:57.962 "abort": true, 00:10:57.962 "compare": false, 00:10:57.962 "compare_and_write": false, 00:10:57.962 "copy": true, 00:10:57.962 "flush": true, 00:10:57.962 "get_zone_info": false, 00:10:57.962 "nvme_admin": false, 00:10:57.962 "nvme_io": false, 00:10:57.962 "nvme_io_md": false, 00:10:57.962 "nvme_iov_md": false, 00:10:57.962 "read": true, 00:10:57.962 "reset": true, 00:10:57.962 "seek_data": false, 00:10:57.962 "seek_hole": false, 00:10:57.962 "unmap": true, 00:10:57.962 "write": true, 00:10:57.962 "write_zeroes": true, 00:10:57.962 "zcopy": true, 00:10:57.962 "zone_append": false, 00:10:57.962 "zone_management": false 00:10:57.962 }, 00:10:57.962 "uuid": "4a8cfad7-6160-499a-9d76-4a2f2594aa1f", 00:10:57.962 "zoned": false 00:10:57.962 } 00:10:57.962 ]' 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 
-- # malloc_size=536870912 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:57.962 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:00.510 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:00.511 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:00.511 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:00.511 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:00.511 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:00.511 14:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:00.511 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:00.511 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.441 ************************************ 00:11:01.441 START TEST filesystem_in_capsule_ext4 00:11:01.441 ************************************ 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:01.441 mke2fs 1.47.0 (5-Feb-2023) 00:11:01.441 Discarding device blocks: 0/522240 done 00:11:01.441 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:01.441 Filesystem UUID: b98339ce-20b7-46fe-a093-15d79fe03a84 00:11:01.441 Superblock backups 
stored on blocks: 00:11:01.441 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:01.441 00:11:01.441 Allocating group tables: 0/64 done 00:11:01.441 Writing inode tables: 0/64 done 00:11:01.441 Creating journal (8192 blocks): done 00:11:01.441 Writing superblocks and filesystem accounting information: 0/64 done 00:11:01.441 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:01.441 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.698 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72766 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.956 00:11:06.956 real 0m5.568s 00:11:06.956 user 0m0.022s 00:11:06.956 sys 0m0.050s 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:06.956 ************************************ 00:11:06.956 END TEST filesystem_in_capsule_ext4 00:11:06.956 ************************************ 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.956 ************************************ 00:11:06.956 START TEST filesystem_in_capsule_btrfs 00:11:06.956 ************************************ 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:06.956 btrfs-progs v6.8.1 00:11:06.956 See https://btrfs.readthedocs.io for more information. 00:11:06.956 00:11:06.956 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:06.956 NOTE: several default settings have changed in version 5.15, please make sure 00:11:06.956 this does not affect your deployments: 00:11:06.956 - DUP for metadata (-m dup) 00:11:06.956 - enabled no-holes (-O no-holes) 00:11:06.956 - enabled free-space-tree (-R free-space-tree) 00:11:06.956 00:11:06.956 Label: (null) 00:11:06.956 UUID: 95e559a9-654d-4688-9c6f-aa81d4007add 00:11:06.956 Node size: 16384 00:11:06.956 Sector size: 4096 (CPU page size: 4096) 00:11:06.956 Filesystem size: 510.00MiB 00:11:06.956 Block group profiles: 00:11:06.956 Data: single 8.00MiB 00:11:06.956 Metadata: DUP 32.00MiB 00:11:06.956 System: DUP 8.00MiB 00:11:06.956 SSD detected: yes 00:11:06.956 Zoned device: no 00:11:06.956 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:06.956 Checksum: crc32c 00:11:06.956 Number of devices: 1 00:11:06.956 Devices: 00:11:06.956 ID SIZE PATH 00:11:06.956 1 510.00MiB /dev/nvme0n1p1 00:11:06.956 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:06.956 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.213 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.214 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:07.471 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.471 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:07.471 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:07.471 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72766 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.728 00:11:07.728 real 0m0.634s 00:11:07.728 user 0m0.017s 00:11:07.728 sys 0m0.059s 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.728 ************************************ 00:11:07.728 END TEST filesystem_in_capsule_btrfs 00:11:07.728 ************************************ 
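
Every filesystem_* test in this suite runs the same smoke test once mkfs returns: mount the partition, create and remove a file with syncs around the deletion, unmount, confirm the target app is still alive with kill -0, and verify via lsblk that both the namespace and its partition remain visible. Condensed from the traces above (device paths and pid variable as in the log):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still present
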
00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.728 ************************************ 00:11:07.728 START TEST filesystem_in_capsule_xfs 00:11:07.728 ************************************ 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:07.728 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:07.728 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:07.728 = sectsz=512 attr=2, projid32bit=1 00:11:07.728 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:07.728 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:07.728 data = bsize=4096 blocks=130560, imaxpct=25 00:11:07.728 = sunit=0 swidth=0 blks 00:11:07.728 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:07.728 log =internal log bsize=4096 blocks=16384, version=2 00:11:07.728 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:07.728 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:08.292 Discarding blocks...Done. 
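
The asterisk banners and the real/user/sys lines that bracket each test come from the run_test wrapper, which times the test function and prints START/END markers around it. An illustrative sketch of that mechanism, assuming the simplest possible shape (the real helper in common/autotest_common.sh does additional xtrace and error bookkeeping):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # emits the real/user/sys summary seen in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
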
00:11:08.292 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:08.292 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.190 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.190 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:10.190 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72766 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.447 00:11:10.447 real 0m2.690s 00:11:10.447 user 0m0.020s 00:11:10.447 sys 0m0.044s 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.447 ************************************ 00:11:10.447 END TEST filesystem_in_capsule_xfs 00:11:10.447 ************************************ 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72766 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 72766 ']' 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 72766 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.447 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72766 00:11:10.447 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:10.447 killing process with pid 72766 00:11:10.447 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:10.447 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72766' 00:11:10.447 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 72766 00:11:10.447 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 72766 00:11:11.857 ************************************ 00:11:11.857 END TEST nvmf_filesystem_in_capsule 00:11:11.857 ************************************ 00:11:11.857 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
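
Teardown mirrors setup in reverse, as traced above: drop the test partition under flock so nothing races the partition table, disconnect the initiator, wait for the block device carrying the test serial to disappear, delete the subsystem over RPC, and kill the target. A condensed rendering (the while loop is an illustrative stand-in for the waitforserial_disconnect helper; rpc_cmd and killprocess are the helpers named in the trace):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # remove partition 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1    # device with the test serial still visible
    done
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    killprocess "$nvmfpid"
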
00:11:11.857 00:11:11.857 real 0m15.370s 00:11:11.857 user 0m58.643s 00:11:11.857 sys 0m1.756s 00:11:11.857 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:11.857 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.857 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:11.857 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.857 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.117 rmmod nvme_tcp 00:11:12.117 rmmod nvme_fabrics 00:11:12.117 rmmod nvme_keyring 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:12.117 14:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.117 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:11:12.377 ************************************ 00:11:12.377 END TEST nvmf_filesystem 00:11:12.377 ************************************ 00:11:12.377 00:11:12.377 real 0m31.983s 00:11:12.377 user 1m57.610s 00:11:12.377 sys 0m3.842s 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.377 ************************************ 00:11:12.377 START TEST nvmf_target_discovery 00:11:12.377 ************************************ 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:12.377 * Looking for test storage... 
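
nvmftestfini, traced just above, unloads the kernel initiator modules, restores iptables minus the SPDK_NVMF rules, and tears down the veth/bridge topology that NET_TYPE=virt built. Roughly, using the interface and namespace names from the log (the final namespace deletion is an assumption about what remove_spdk_ns does):

    modprobe -v -r nvme-tcp nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed behavior of remove_spdk_ns
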
00:11:12.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.377 --rc genhtml_branch_coverage=1 00:11:12.377 --rc genhtml_function_coverage=1 00:11:12.377 --rc genhtml_legend=1 00:11:12.377 --rc geninfo_all_blocks=1 00:11:12.377 --rc geninfo_unexecuted_blocks=1 00:11:12.377 00:11:12.377 ' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.377 --rc genhtml_branch_coverage=1 00:11:12.377 --rc genhtml_function_coverage=1 00:11:12.377 --rc genhtml_legend=1 00:11:12.377 --rc geninfo_all_blocks=1 00:11:12.377 --rc geninfo_unexecuted_blocks=1 00:11:12.377 00:11:12.377 ' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.377 --rc genhtml_branch_coverage=1 00:11:12.377 --rc genhtml_function_coverage=1 00:11:12.377 --rc genhtml_legend=1 00:11:12.377 --rc geninfo_all_blocks=1 00:11:12.377 --rc geninfo_unexecuted_blocks=1 00:11:12.377 00:11:12.377 ' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.377 --rc genhtml_branch_coverage=1 00:11:12.377 --rc genhtml_function_coverage=1 00:11:12.377 --rc genhtml_legend=1 00:11:12.377 --rc geninfo_all_blocks=1 00:11:12.377 --rc geninfo_unexecuted_blocks=1 00:11:12.377 00:11:12.377 ' 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.377 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.378 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
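The `[: : integer expression expected` message a few lines above comes from common.sh line 33 testing an empty string with `-eq`; `test`'s numeric operators require integer operands on both sides, so the comparison itself errors out (exit status 2) rather than simply evaluating false. A minimal sketch of the failure and two tolerant variants (the `flag` name is a hypothetical stand-in, not a variable from common.sh):

```bash
#!/usr/bin/env bash
# Reproduces the "[: : integer expression expected" error seen in the log:
# test's -eq needs integers on both sides, so an empty operand makes the
# comparison fail outright instead of returning "false".
flag=''    # hypothetical stand-in for the empty value common.sh tested

if [ "$flag" -eq 1 ] 2>/dev/null; then   # drop 2>/dev/null to see the error
    echo "enabled"
fi

# Defensive variants that tolerate an empty or unset variable:
if [ "${flag:-0}" -eq 1 ]; then echo "enabled"; fi   # default before testing
if [[ $flag == 1 ]]; then echo "enabled"; fi         # string compare instead
```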
00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:12.378 Cannot find device "nvmf_init_br" 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:11:12.378 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:12.378 Cannot find device "nvmf_init_br2" 00:11:12.378 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:11:12.378 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:12.636 Cannot find device "nvmf_tgt_br" 00:11:12.636 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:11:12.636 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.636 Cannot find device "nvmf_tgt_br2" 00:11:12.636 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:11:12.636 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:12.636 Cannot find device "nvmf_init_br" 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:12.637 Cannot find device "nvmf_init_br2" 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:12.637 Cannot find device "nvmf_tgt_br" 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:12.637 Cannot find device "nvmf_tgt_br2" 00:11:12.637 14:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:12.637 Cannot find device "nvmf_br" 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:12.637 Cannot find device "nvmf_init_if" 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:12.637 Cannot find device "nvmf_init_if2" 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:12.637 14:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:12.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:12.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:12.637 00:11:12.637 --- 10.0.0.3 ping statistics --- 00:11:12.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.637 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:12.637 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:12.637 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:11:12.637 00:11:12.637 --- 10.0.0.4 ping statistics --- 00:11:12.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.637 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:12.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:12.637 00:11:12.637 --- 10.0.0.1 ping statistics --- 00:11:12.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.637 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:12.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:11:12.637 00:11:12.637 --- 10.0.0.2 ping statistics --- 00:11:12.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.637 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:12.637 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:12.895 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:12.895 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.895 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:12.895 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=73372 00:11:12.895 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 73372 00:11:12.895 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.896 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 73372 ']' 00:11:12.896 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.896 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:12.896 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.896 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:12.896 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.896 [2024-11-06 14:15:40.356868] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:12.896 [2024-11-06 14:15:40.356999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.896 [2024-11-06 14:15:40.518076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.153 [2024-11-06 14:15:40.618908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.153 [2024-11-06 14:15:40.618975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.153 [2024-11-06 14:15:40.618987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.153 [2024-11-06 14:15:40.618995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.153 [2024-11-06 14:15:40.619004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
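At this point the harness has launched `nvmf_tgt` inside the `nvmf_tgt_ns_spdk` namespace and is blocking in `waitforlisten` until the app answers on `/var/tmp/spdk.sock`. A condensed sketch of that launch-and-wait pattern, with the binary path, namespace, and flags taken from the trace; the polling loop is an illustrative approximation of the real helper, not a copy of it:

```bash
#!/usr/bin/env bash
# Launch nvmf_tgt in the test namespace, then poll its JSON-RPC socket
# until the app is initialized before issuing any rpc.py commands.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

ready=0
for _ in $(seq 1 100); do
    # Bail out if the target died during startup instead of spinning forever.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # rpc.py fails until the app listens on /var/tmp/spdk.sock.
    if "$SPDK/scripts/rpc.py" -t 1 framework_wait_init >/dev/null 2>&1; then
        ready=1
        break
    fi
    sleep 0.5
done
(( ready )) || { echo "timed out waiting for /var/tmp/spdk.sock" >&2; exit 1; }
```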
00:11:13.153 [2024-11-06 14:15:40.620398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.153 [2024-11-06 14:15:40.621198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.153 [2024-11-06 14:15:40.621229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.153 [2024-11-06 14:15:40.621230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 [2024-11-06 14:15:41.191999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 Null1 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 [2024-11-06 14:15:41.236186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 Null2 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:13.717 Null3 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 Null4 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.717 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.717 14:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.718 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 4420 00:11:13.975 00:11:13.975 Discovery Log Number of Records 6, Generation counter 6 00:11:13.975 =====Discovery Log Entry 0====== 00:11:13.975 trtype: tcp 00:11:13.975 adrfam: ipv4 00:11:13.975 subtype: current discovery subsystem 00:11:13.975 treq: not required 00:11:13.975 portid: 0 00:11:13.975 trsvcid: 4420 00:11:13.975 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:13.975 traddr: 10.0.0.3 00:11:13.975 eflags: explicit discovery connections, duplicate discovery information 00:11:13.975 sectype: none 00:11:13.975 =====Discovery Log Entry 1====== 00:11:13.975 trtype: tcp 00:11:13.975 adrfam: ipv4 00:11:13.975 subtype: nvme subsystem 00:11:13.975 treq: not required 00:11:13.975 portid: 0 00:11:13.975 trsvcid: 4420 00:11:13.975 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:13.975 traddr: 10.0.0.3 00:11:13.975 eflags: none 00:11:13.975 sectype: none 00:11:13.975 =====Discovery Log Entry 2====== 00:11:13.975 trtype: tcp 00:11:13.975 adrfam: ipv4 00:11:13.975 subtype: nvme subsystem 00:11:13.975 treq: not required 00:11:13.975 portid: 0 00:11:13.975 trsvcid: 4420 00:11:13.975 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:13.975 traddr: 10.0.0.3 00:11:13.975 eflags: none 00:11:13.975 sectype: none 00:11:13.975 =====Discovery Log Entry 3====== 00:11:13.975 trtype: tcp 00:11:13.975 adrfam: ipv4 00:11:13.975 subtype: nvme subsystem 00:11:13.975 treq: not required 00:11:13.975 portid: 0 00:11:13.975 trsvcid: 4420 00:11:13.975 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:13.975 traddr: 10.0.0.3 00:11:13.975 eflags: none 00:11:13.975 sectype: none 00:11:13.975 =====Discovery Log Entry 4====== 00:11:13.975 trtype: tcp 00:11:13.975 adrfam: ipv4 00:11:13.975 subtype: nvme subsystem 
00:11:13.975 treq: not required 00:11:13.975 portid: 0 00:11:13.975 trsvcid: 4420 00:11:13.975 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:13.975 traddr: 10.0.0.3 00:11:13.975 eflags: none 00:11:13.975 sectype: none 00:11:13.975 =====Discovery Log Entry 5====== 00:11:13.975 trtype: tcp 00:11:13.975 adrfam: ipv4 00:11:13.975 subtype: discovery subsystem referral 00:11:13.975 treq: not required 00:11:13.975 portid: 0 00:11:13.975 trsvcid: 4430 00:11:13.975 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:13.975 traddr: 10.0.0.3 00:11:13.975 eflags: none 00:11:13.975 sectype: none 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:13.975 Perform nvmf subsystem discovery via RPC 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 [ 00:11:13.975 { 00:11:13.975 "allow_any_host": true, 00:11:13.975 "hosts": [], 00:11:13.975 "listen_addresses": [ 00:11:13.975 { 00:11:13.975 "adrfam": "IPv4", 00:11:13.975 "traddr": "10.0.0.3", 00:11:13.975 "trsvcid": "4420", 00:11:13.975 "trtype": "TCP" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:13.975 "subtype": "Discovery" 00:11:13.975 }, 00:11:13.975 { 00:11:13.975 "allow_any_host": true, 00:11:13.975 "hosts": [], 00:11:13.975 "listen_addresses": [ 00:11:13.975 { 00:11:13.975 "adrfam": "IPv4", 00:11:13.975 "traddr": "10.0.0.3", 00:11:13.975 "trsvcid": "4420", 00:11:13.975 "trtype": "TCP" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "max_cntlid": 65519, 00:11:13.975 "max_namespaces": 32, 00:11:13.975 "min_cntlid": 1, 00:11:13.975 "model_number": "SPDK bdev Controller", 00:11:13.975 "namespaces": [ 00:11:13.975 { 00:11:13.975 "bdev_name": "Null1", 00:11:13.975 "name": "Null1", 00:11:13.975 "nguid": "3A7D819680884666860E259D8F7FC60E", 00:11:13.975 "nsid": 1, 00:11:13.975 "uuid": "3a7d8196-8088-4666-860e-259d8f7fc60e" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.975 "serial_number": "SPDK00000000000001", 00:11:13.975 "subtype": "NVMe" 00:11:13.975 }, 00:11:13.975 { 00:11:13.975 "allow_any_host": true, 00:11:13.975 "hosts": [], 00:11:13.975 "listen_addresses": [ 00:11:13.975 { 00:11:13.975 "adrfam": "IPv4", 00:11:13.975 "traddr": "10.0.0.3", 00:11:13.975 "trsvcid": "4420", 00:11:13.975 "trtype": "TCP" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "max_cntlid": 65519, 00:11:13.975 "max_namespaces": 32, 00:11:13.975 "min_cntlid": 1, 00:11:13.975 "model_number": "SPDK bdev Controller", 00:11:13.975 "namespaces": [ 00:11:13.975 { 00:11:13.975 "bdev_name": "Null2", 00:11:13.975 "name": "Null2", 00:11:13.975 "nguid": "A10AD94CDD9A43FF85DD4F49C891D59C", 00:11:13.975 "nsid": 1, 00:11:13.975 "uuid": "a10ad94c-dd9a-43ff-85dd-4f49c891d59c" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:13.975 "serial_number": "SPDK00000000000002", 00:11:13.975 "subtype": "NVMe" 00:11:13.975 }, 00:11:13.975 { 00:11:13.975 "allow_any_host": true, 00:11:13.975 "hosts": [], 00:11:13.975 "listen_addresses": [ 00:11:13.975 { 00:11:13.975 "adrfam": "IPv4", 00:11:13.975 "traddr": "10.0.0.3", 00:11:13.975 "trsvcid": "4420", 00:11:13.975 
"trtype": "TCP" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "max_cntlid": 65519, 00:11:13.975 "max_namespaces": 32, 00:11:13.975 "min_cntlid": 1, 00:11:13.975 "model_number": "SPDK bdev Controller", 00:11:13.975 "namespaces": [ 00:11:13.975 { 00:11:13.975 "bdev_name": "Null3", 00:11:13.975 "name": "Null3", 00:11:13.975 "nguid": "617AAE371AA74E1C9D062FCEB8033A82", 00:11:13.975 "nsid": 1, 00:11:13.975 "uuid": "617aae37-1aa7-4e1c-9d06-2fceb8033a82" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:13.975 "serial_number": "SPDK00000000000003", 00:11:13.975 "subtype": "NVMe" 00:11:13.975 }, 00:11:13.975 { 00:11:13.975 "allow_any_host": true, 00:11:13.975 "hosts": [], 00:11:13.975 "listen_addresses": [ 00:11:13.975 { 00:11:13.975 "adrfam": "IPv4", 00:11:13.975 "traddr": "10.0.0.3", 00:11:13.975 "trsvcid": "4420", 00:11:13.975 "trtype": "TCP" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "max_cntlid": 65519, 00:11:13.975 "max_namespaces": 32, 00:11:13.975 "min_cntlid": 1, 00:11:13.975 "model_number": "SPDK bdev Controller", 00:11:13.975 "namespaces": [ 00:11:13.975 { 00:11:13.975 "bdev_name": "Null4", 00:11:13.975 "name": "Null4", 00:11:13.975 "nguid": "09077CC04FA14B75B90F0FB927FA0254", 00:11:13.975 "nsid": 1, 00:11:13.975 "uuid": "09077cc0-4fa1-4b75-b90f-0fb927fa0254" 00:11:13.975 } 00:11:13.975 ], 00:11:13.975 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:13.975 "serial_number": "SPDK00000000000004", 00:11:13.975 "subtype": "NVMe" 00:11:13.975 } 00:11:13.975 ] 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.975 14:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:13.975 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.976 14:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.976 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.347 rmmod nvme_tcp 00:11:15.347 rmmod nvme_fabrics 00:11:15.347 rmmod nvme_keyring 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 73372 ']' 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 73372 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 73372 ']' 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 73372 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73372 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:15.347 killing process with pid 73372 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73372' 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 73372 00:11:15.347 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 73372 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:15.911 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:11:16.168 00:11:16.168 real 0m3.776s 00:11:16.168 user 0m10.912s 00:11:16.168 sys 0m0.656s 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- 
# xtrace_disable 00:11:16.168 ************************************ 00:11:16.168 END TEST nvmf_target_discovery 00:11:16.168 ************************************ 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.168 ************************************ 00:11:16.168 START TEST nvmf_referrals 00:11:16.168 ************************************ 00:11:16.168 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:16.169 * Looking for test storage... 00:11:16.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:16.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.169 --rc genhtml_branch_coverage=1 00:11:16.169 --rc genhtml_function_coverage=1 00:11:16.169 --rc genhtml_legend=1 00:11:16.169 --rc geninfo_all_blocks=1 00:11:16.169 --rc geninfo_unexecuted_blocks=1 00:11:16.169 00:11:16.169 ' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:16.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.169 --rc genhtml_branch_coverage=1 00:11:16.169 --rc genhtml_function_coverage=1 00:11:16.169 --rc genhtml_legend=1 00:11:16.169 --rc geninfo_all_blocks=1 00:11:16.169 --rc geninfo_unexecuted_blocks=1 00:11:16.169 00:11:16.169 ' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:16.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.169 --rc genhtml_branch_coverage=1 00:11:16.169 --rc genhtml_function_coverage=1 00:11:16.169 --rc genhtml_legend=1 00:11:16.169 --rc geninfo_all_blocks=1 00:11:16.169 --rc geninfo_unexecuted_blocks=1 00:11:16.169 00:11:16.169 ' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:16.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.169 --rc genhtml_branch_coverage=1 00:11:16.169 --rc genhtml_function_coverage=1 00:11:16.169 --rc genhtml_legend=1 00:11:16.169 --rc geninfo_all_blocks=1 00:11:16.169 --rc geninfo_unexecuted_blocks=1 00:11:16.169 00:11:16.169 ' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
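[annotation] The `lt 1.15 2` trace above is scripts/common.sh deciding whether the installed lcov predates 2.x, which determines which `--rc lcov_*` spellings get exported. A minimal sketch of that field-by-field comparison, assuming nothing beyond plain bash (the helper name `lt` matches the trace; the body is illustrative, not the verbatim scripts/common.sh source):

# Split versions on '.'/'-' and compare numerically, field by field.
lt() {
  local IFS=.- i
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly older
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # strictly newer
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: keep the old --rc lcov_*_coverage=1 option names"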
00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.169 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:16.169 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:16.169 14:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.170 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:16.427 Cannot find device "nvmf_init_br" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:16.427 Cannot find device "nvmf_init_br2" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:16.427 Cannot find device "nvmf_tgt_br" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.427 Cannot find device "nvmf_tgt_br2" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:16.427 Cannot find device "nvmf_init_br" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:16.427 Cannot find device "nvmf_init_br2" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:16.427 Cannot find device "nvmf_tgt_br" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:16.427 Cannot find device "nvmf_tgt_br2" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:16.427 Cannot find device "nvmf_br" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:16.427 Cannot find device "nvmf_init_if" 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:16.427 Cannot find device "nvmf_init_if2" 00:11:16.427 14:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.427 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.427 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.427 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:16.427 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:16.427 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:16.427 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:16.428 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:16.685 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.685 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:11:16.685 00:11:16.685 --- 10.0.0.3 ping statistics --- 00:11:16.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.685 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:16.685 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:16.685 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:11:16.685 00:11:16.685 --- 10.0.0.4 ping statistics --- 00:11:16.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.685 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:16.685 00:11:16.685 --- 10.0.0.1 ping statistics --- 00:11:16.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.685 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:16.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:16.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:16.685 00:11:16.685 --- 10.0.0.2 ping statistics --- 00:11:16.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.685 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.685 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=73667 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 73667 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 73667 ']' 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.686 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.686 [2024-11-06 14:15:44.220916] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
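[annotation] The ping checks above verify the veth/bridge sandbox that nvmf_veth_init just built, before nvmf_tgt is launched inside the namespace. That sandbox can be reproduced standalone; a condensed sketch covering one initiator pair and one target pair, assuming root and iproute2 (names follow the trace, and the iptables comment tagging done by ipts is omitted):

# One veth pair per endpoint: the *_br side joins the bridge, the
# target side moves into the namespace that will run nvmf_tgt.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# Admit NVMe/TCP traffic, then confirm both directions route.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1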
00:11:16.686 [2024-11-06 14:15:44.221420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.943 [2024-11-06 14:15:44.380614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.943 [2024-11-06 14:15:44.487632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.943 [2024-11-06 14:15:44.487682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.943 [2024-11-06 14:15:44.487694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.943 [2024-11-06 14:15:44.487701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.943 [2024-11-06 14:15:44.487711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.943 [2024-11-06 14:15:44.489201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.943 [2024-11-06 14:15:44.489292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.943 [2024-11-06 14:15:44.489383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.943 [2024-11-06 14:15:44.489412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.508 [2024-11-06 14:15:45.062267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.508 [2024-11-06 14:15:45.078468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:17.508 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:17.765 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:17.766 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
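[annotation] What referrals.sh is checking at this point: a referral registered with an explicit subsystem NQN surfaces as a different record subtype in the discovery log page. A sketch of the same round trip using the RPCs the trace invokes, assuming SPDK's scripts/rpc.py is on PATH and talking to the running target's socket:

rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
# Both referrals report traddr 127.0.0.2 over RPC...
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# ...but on the wire the first is a "discovery subsystem referral" record
# and the second an "nvme subsystem" record:
nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
  | jq '.records[] | {subtype, subnqn, traddr}'
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1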
00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:18.024 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:18.282 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.539 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:18.539 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:18.801 
14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.801 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:19.075 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:19.075 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:19.075 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:19.075 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:19.075 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.075 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:11:19.334 rmmod nvme_tcp 00:11:19.334 rmmod nvme_fabrics 00:11:19.334 rmmod nvme_keyring 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 73667 ']' 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 73667 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 73667 ']' 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 73667 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73667 00:11:19.334 killing process with pid 73667 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73667' 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 73667 00:11:19.334 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 73667 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:20.267 14:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:11:20.267 00:11:20.267 real 0m4.143s 00:11:20.267 user 0m12.263s 00:11:20.267 sys 0m0.912s 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.267 ************************************ 00:11:20.267 END TEST nvmf_referrals 00:11:20.267 ************************************ 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.267 14:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.268 14:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.268 ************************************ 00:11:20.268 START TEST nvmf_connect_disconnect 00:11:20.268 ************************************ 00:11:20.268 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:20.268 * Looking for test storage... 
00:11:20.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:20.268 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.268 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.268 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.527 --rc genhtml_branch_coverage=1 00:11:20.527 --rc genhtml_function_coverage=1 00:11:20.527 --rc genhtml_legend=1 00:11:20.527 --rc geninfo_all_blocks=1 00:11:20.527 --rc geninfo_unexecuted_blocks=1 00:11:20.527 00:11:20.527 ' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.527 --rc genhtml_branch_coverage=1 00:11:20.527 --rc genhtml_function_coverage=1 00:11:20.527 --rc genhtml_legend=1 00:11:20.527 --rc geninfo_all_blocks=1 00:11:20.527 --rc geninfo_unexecuted_blocks=1 00:11:20.527 00:11:20.527 ' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.527 --rc genhtml_branch_coverage=1 00:11:20.527 --rc genhtml_function_coverage=1 00:11:20.527 --rc genhtml_legend=1 00:11:20.527 --rc geninfo_all_blocks=1 00:11:20.527 --rc geninfo_unexecuted_blocks=1 00:11:20.527 00:11:20.527 ' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.527 --rc genhtml_branch_coverage=1 00:11:20.527 --rc genhtml_function_coverage=1 00:11:20.527 --rc genhtml_legend=1 00:11:20.527 --rc geninfo_all_blocks=1 00:11:20.527 --rc geninfo_unexecuted_blocks=1 00:11:20.527 00:11:20.527 ' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.527 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.528 14:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.528 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:20.528 Cannot find device "nvmf_init_br" 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:20.528 Cannot find device "nvmf_init_br2" 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:20.528 Cannot find device "nvmf_tgt_br" 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:11:20.528 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.528 Cannot find device "nvmf_tgt_br2" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:20.528 Cannot find device "nvmf_init_br" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:20.528 Cannot find device "nvmf_init_br2" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:20.528 Cannot find device "nvmf_tgt_br" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:20.528 Cannot find device "nvmf_tgt_br2" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
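Each "Cannot find device" message above is expected: nvmf_veth_init first probes for leftover interfaces from a previous run, and the trailing `true` keeps those failed probes from tripping `set -e`. The topology built next (traced below) is veth pairs for the initiators plus pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. A condensed sketch of one initiator/target pair, using the same names and addresses as this run (the second pair with 10.0.0.2/10.0.0.4 follows the same pattern):

  # Hedged sketch of the veth/bridge topology nvmf_veth_init creates.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side ends
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up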
00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:20.528 Cannot find device "nvmf_br" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:20.528 Cannot find device "nvmf_init_if" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:20.528 Cannot find device "nvmf_init_if2" 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:20.528 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:20.529 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:20.529 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:20.788 14:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:20.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:20.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:11:20.788 00:11:20.788 --- 10.0.0.3 ping statistics --- 00:11:20.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.788 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:20.788 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:20.788 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms 00:11:20.788 00:11:20.788 --- 10.0.0.4 ping statistics --- 00:11:20.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.788 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:20.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:20.788 00:11:20.788 --- 10.0.0.1 ping statistics --- 00:11:20.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.788 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:20.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:11:20.788 00:11:20.788 --- 10.0.0.2 ping statistics --- 00:11:20.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.788 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=74031 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 74031 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 74031 ']' 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:20.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.788 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.789 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:20.789 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:20.789 [2024-11-06 14:15:48.358003] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:20.789 [2024-11-06 14:15:48.358122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.059 [2024-11-06 14:15:48.519757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.059 [2024-11-06 14:15:48.623711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.059 [2024-11-06 14:15:48.623767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.059 [2024-11-06 14:15:48.623779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.059 [2024-11-06 14:15:48.623787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.060 [2024-11-06 14:15:48.623797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
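Once nvmf_tgt is listening on /var/tmp/spdk.sock, the test configures it over JSON-RPC (the rpc_cmd traces follow) and then loops nvme connect/disconnect 100 times. A condensed sketch of the equivalent calls, assuming SPDK's scripts/rpc.py is on PATH; the connect flags other than `-i 8` are not shown in this log and are assumed from connect_disconnect.sh's usual arguments:

  # Hedged sketch of the configuration this test applies via rpc_cmd.
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                  # 64 MiB bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Connect/disconnect loop; the log shows num_iterations=100 and 8 I/O queues.
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done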
00:11:21.060 [2024-11-06 14:15:48.625236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.060 [2024-11-06 14:15:48.625355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.060 [2024-11-06 14:15:48.625713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.060 [2024-11-06 14:15:48.625792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.624 [2024-11-06 14:15:49.165812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.624 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.882 14:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.882 [2024-11-06 14:15:49.264506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:21.882 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:24.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.641 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:22.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.546 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:07.547 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:07.547 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:07.547 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:07.547 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:07.547 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:07.547 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:07.547 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:07.547 rmmod nvme_tcp 00:15:07.547 rmmod nvme_fabrics 00:15:07.547 rmmod nvme_keyring 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 74031 ']' 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 74031 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 74031 ']' 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 74031 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:15:07.805 
14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74031 00:15:07.805 killing process with pid 74031 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74031' 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 74031 00:15:07.805 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 74031 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:08.371 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:08.629 14:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:15:08.629 ************************************ 00:15:08.629 END TEST nvmf_connect_disconnect 00:15:08.629 ************************************ 00:15:08.629 00:15:08.629 real 3m48.385s 00:15:08.629 user 14m49.787s 00:15:08.629 sys 0m20.014s 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:08.629 ************************************ 00:15:08.629 START TEST nvmf_multitarget 00:15:08.629 ************************************ 00:15:08.629 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:08.889 * Looking for test storage... 
00:15:08.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.889 --rc genhtml_branch_coverage=1 00:15:08.889 --rc genhtml_function_coverage=1 00:15:08.889 --rc genhtml_legend=1 00:15:08.889 --rc geninfo_all_blocks=1 00:15:08.889 --rc geninfo_unexecuted_blocks=1 00:15:08.889 00:15:08.889 ' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.889 --rc genhtml_branch_coverage=1 00:15:08.889 --rc genhtml_function_coverage=1 00:15:08.889 --rc genhtml_legend=1 00:15:08.889 --rc geninfo_all_blocks=1 00:15:08.889 --rc geninfo_unexecuted_blocks=1 00:15:08.889 00:15:08.889 ' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.889 --rc genhtml_branch_coverage=1 00:15:08.889 --rc genhtml_function_coverage=1 00:15:08.889 --rc genhtml_legend=1 00:15:08.889 --rc geninfo_all_blocks=1 00:15:08.889 --rc geninfo_unexecuted_blocks=1 00:15:08.889 00:15:08.889 ' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.889 --rc genhtml_branch_coverage=1 00:15:08.889 --rc genhtml_function_coverage=1 00:15:08.889 --rc genhtml_legend=1 00:15:08.889 --rc geninfo_all_blocks=1 00:15:08.889 --rc geninfo_unexecuted_blocks=1 00:15:08.889 00:15:08.889 ' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.889 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:08.890 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:08.890 Cannot find device "nvmf_init_br" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:08.890 Cannot find device "nvmf_init_br2" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:08.890 Cannot find device "nvmf_tgt_br" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.890 Cannot find device "nvmf_tgt_br2" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:08.890 Cannot find device "nvmf_init_br" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:08.890 Cannot find device "nvmf_init_br2" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:08.890 Cannot find device "nvmf_tgt_br" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:08.890 Cannot find device "nvmf_tgt_br2" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:08.890 Cannot find device "nvmf_br" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:08.890 Cannot find device "nvmf_init_if" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:08.890 Cannot find device "nvmf_init_if2" 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.890 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.148 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:09.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:09.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:09.149 00:15:09.149 --- 10.0.0.3 ping statistics --- 00:15:09.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.149 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:09.149 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:09.149 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:15:09.149 00:15:09.149 --- 10.0.0.4 ping statistics --- 00:15:09.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.149 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:15:09.149 00:15:09.149 --- 10.0.0.1 ping statistics --- 00:15:09.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.149 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:09.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:09.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:09.149 00:15:09.149 --- 10.0.0.2 ping statistics --- 00:15:09.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.149 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:09.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=77898 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 77898 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 77898 ']' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:09.149 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:09.407 [2024-11-06 14:19:36.813071] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
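At this point nvmf_veth_init has rebuilt the test network from scratch: two initiator-side veth interfaces on the host (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, the peer ends of all four enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420 tagged with an SPDK_NVMF comment, and four pings confirming connectivity in both directions. (The earlier "[: : integer expression expected" complaint from common.sh line 33 is the trace of '[' '' -eq 1 ']': a numeric test against an unset flag, noisy but harmless.) A condensed sketch of that topology with a single interface per side, using only ip/iptables commands from the trace (the real setup builds two pairs per side, and the iptables comment text is elided here):

    # One initiator/target veth pair bridged together, as nvmf_veth_init does above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.3    # host reaches the namespaced target side

nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) so it is only reachable over this fabric, and waitforlisten polls /var/tmp/spdk.sock until the app (pid 77898 here) answers.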
00:15:09.407 [2024-11-06 14:19:36.813183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.407 [2024-11-06 14:19:36.971920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.665 [2024-11-06 14:19:37.093388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.665 [2024-11-06 14:19:37.093443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.665 [2024-11-06 14:19:37.093456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.665 [2024-11-06 14:19:37.093464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.665 [2024-11-06 14:19:37.093473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.665 [2024-11-06 14:19:37.095339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.665 [2024-11-06 14:19:37.095786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.665 [2024-11-06 14:19:37.095795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.665 [2024-11-06 14:19:37.095824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:10.230 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:10.488 "nvmf_tgt_1" 00:15:10.488 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:10.488 "nvmf_tgt_2" 00:15:10.488 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:10.488 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:15:10.488 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:10.488 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:10.746 true 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:10.746 true 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:10.746 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.003 rmmod nvme_tcp 00:15:11.003 rmmod nvme_fabrics 00:15:11.003 rmmod nvme_keyring 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 77898 ']' 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 77898 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 77898 ']' 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 77898 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77898 00:15:11.003 killing process with pid 77898 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
77898' 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 77898 00:15:11.003 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 77898 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:11.936 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:15:11.937 
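The suite body itself is short: multitarget.sh binds rpc_py to multitarget_rpc.py, confirms a single default target, creates nvmf_tgt_1 and nvmf_tgt_2 (the -s 32 argument is presumably each target's max-subsystem limit), checks that jq length now reports 3, deletes both, and confirms the count drops back to 1; nvmftestfini then unloads the nvme-tcp/fabrics/keyring modules, kills pid 77898, strips the SPDK_NVMF-tagged iptables rules by piping iptables-save through grep -v SPDK_NVMF into iptables-restore, and deletes the veths, bridge, and namespace. An outline of the RPC sequence, assuming it is run from the repo root:

    # Outline of the multitarget checks traced above; -s is assumed to be
    # the per-target max-subsystem limit.
    rpc=./test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default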
************************************ 00:15:11.937 END TEST nvmf_multitarget 00:15:11.937 ************************************ 00:15:11.937 00:15:11.937 real 0m3.235s 00:15:11.937 user 0m9.125s 00:15:11.937 sys 0m0.710s 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.937 ************************************ 00:15:11.937 START TEST nvmf_rpc 00:15:11.937 ************************************ 00:15:11.937 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:11.937 * Looking for test storage... 00:15:12.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.207 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:12.207 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:12.207 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:15:12.207 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:12.207 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.207 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.207 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:12.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.208 --rc genhtml_branch_coverage=1 00:15:12.208 --rc genhtml_function_coverage=1 00:15:12.208 --rc genhtml_legend=1 00:15:12.208 --rc geninfo_all_blocks=1 00:15:12.208 --rc geninfo_unexecuted_blocks=1 00:15:12.208 00:15:12.208 ' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:12.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.208 --rc genhtml_branch_coverage=1 00:15:12.208 --rc genhtml_function_coverage=1 00:15:12.208 --rc genhtml_legend=1 00:15:12.208 --rc geninfo_all_blocks=1 00:15:12.208 --rc geninfo_unexecuted_blocks=1 00:15:12.208 00:15:12.208 ' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:12.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.208 --rc genhtml_branch_coverage=1 00:15:12.208 --rc genhtml_function_coverage=1 00:15:12.208 --rc genhtml_legend=1 00:15:12.208 --rc geninfo_all_blocks=1 00:15:12.208 --rc geninfo_unexecuted_blocks=1 00:15:12.208 00:15:12.208 ' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:12.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.208 --rc genhtml_branch_coverage=1 00:15:12.208 --rc genhtml_function_coverage=1 00:15:12.208 --rc genhtml_legend=1 00:15:12.208 --rc geninfo_all_blocks=1 00:15:12.208 --rc geninfo_unexecuted_blocks=1 00:15:12.208 00:15:12.208 ' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.208 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.208 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:12.208 Cannot find device "nvmf_init_br" 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:15:12.208 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:12.208 Cannot find device "nvmf_init_br2" 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:12.208 Cannot find device "nvmf_tgt_br" 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.208 Cannot find device "nvmf_tgt_br2" 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:12.208 Cannot find device "nvmf_init_br" 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:12.208 Cannot find device "nvmf_init_br2" 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:12.208 Cannot find device "nvmf_tgt_br" 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:15:12.208 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:12.209 Cannot find device "nvmf_tgt_br2" 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:12.209 Cannot find device "nvmf_br" 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:12.209 Cannot find device "nvmf_init_if" 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:12.209 Cannot find device "nvmf_init_if2" 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.209 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:12.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:12.466 00:15:12.466 --- 10.0.0.3 ping statistics --- 00:15:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.466 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:12.466 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:12.466 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:15:12.466 00:15:12.466 --- 10.0.0.4 ping statistics --- 00:15:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.466 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:15:12.466 00:15:12.466 --- 10.0.0.1 ping statistics --- 00:15:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.466 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:12.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:15:12.466 00:15:12.466 --- 10.0.0.2 ping statistics --- 00:15:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.466 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=78185 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 78185 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 78185 ']' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:12.466 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.466 [2024-11-06 14:19:40.042684] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
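Up to this point nvmf_veth_init has built the network that the target (launched just above with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt) will listen on: the nvmf_tgt_ns_spdk namespace, four veth pairs, one bridge (nvmf_br), initiator addresses 10.0.0.1-2 on the host side, target addresses 10.0.0.3-4 inside the namespace, iptables ACCEPT rules for NVMe/TCP port 4420, and ping checks in both directions. A condensed, hand-written sketch of those same commands (assumes root, iproute2 and iptables; every device name and address below is taken from the log):

  #!/usr/bin/env bash
  # Rebuild the nvmf test topology by hand (condensed from nvmf_veth_init above).
  set -e
  ip netns add nvmf_tgt_ns_spdk
  # Four veth pairs: the *_if ends carry traffic, the *_br ends get bridged.
  for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
              nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
    ip link add "${pair%%:*}" type veth peer name "${pair##*:}"
  done
  # Target-side interfaces move into the namespace; the initiator side stays on the host.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up, inside and outside the namespace.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # One bridge stitches the four peer ends into a flat 10.0.0.0/24 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
  done
  # Open the NVMe/TCP port and let frames hairpin across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Same reachability checks as the log: host -> namespace and back.
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1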
00:15:12.466 [2024-11-06 14:19:40.042778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.724 [2024-11-06 14:19:40.197622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.724 [2024-11-06 14:19:40.303054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.724 [2024-11-06 14:19:40.303103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.724 [2024-11-06 14:19:40.303115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.724 [2024-11-06 14:19:40.303122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.724 [2024-11-06 14:19:40.303131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.724 [2024-11-06 14:19:40.304551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.724 [2024-11-06 14:19:40.305014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.724 [2024-11-06 14:19:40.305035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.724 [2024-11-06 14:19:40.305229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.289 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.289 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:13.289 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:13.289 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.289 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.546 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.546 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:13.546 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.546 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.546 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.546 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:13.546 "poll_groups": [ 00:15:13.546 { 00:15:13.546 "admin_qpairs": 0, 00:15:13.546 "completed_nvme_io": 0, 00:15:13.546 "current_admin_qpairs": 0, 00:15:13.546 "current_io_qpairs": 0, 00:15:13.546 "io_qpairs": 0, 00:15:13.546 "name": "nvmf_tgt_poll_group_000", 00:15:13.546 "pending_bdev_io": 0, 00:15:13.546 "transports": [] 00:15:13.546 }, 00:15:13.546 { 00:15:13.546 "admin_qpairs": 0, 00:15:13.546 "completed_nvme_io": 0, 00:15:13.546 "current_admin_qpairs": 0, 00:15:13.547 "current_io_qpairs": 0, 00:15:13.547 "io_qpairs": 0, 00:15:13.547 "name": "nvmf_tgt_poll_group_001", 00:15:13.547 "pending_bdev_io": 0, 00:15:13.547 "transports": [] 00:15:13.547 }, 00:15:13.547 { 00:15:13.547 "admin_qpairs": 0, 00:15:13.547 "completed_nvme_io": 0, 00:15:13.547 "current_admin_qpairs": 0, 00:15:13.547 "current_io_qpairs": 0, 
00:15:13.547 "io_qpairs": 0, 00:15:13.547 "name": "nvmf_tgt_poll_group_002", 00:15:13.547 "pending_bdev_io": 0, 00:15:13.547 "transports": [] 00:15:13.547 }, 00:15:13.547 { 00:15:13.547 "admin_qpairs": 0, 00:15:13.547 "completed_nvme_io": 0, 00:15:13.547 "current_admin_qpairs": 0, 00:15:13.547 "current_io_qpairs": 0, 00:15:13.547 "io_qpairs": 0, 00:15:13.547 "name": "nvmf_tgt_poll_group_003", 00:15:13.547 "pending_bdev_io": 0, 00:15:13.547 "transports": [] 00:15:13.547 } 00:15:13.547 ], 00:15:13.547 "tick_rate": 2600000000 00:15:13.547 }' 00:15:13.547 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:13.547 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:13.547 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:13.547 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:13.547 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:13.547 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.547 [2024-11-06 14:19:41.029859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:13.547 "poll_groups": [ 00:15:13.547 { 00:15:13.547 "admin_qpairs": 0, 00:15:13.547 "completed_nvme_io": 0, 00:15:13.547 "current_admin_qpairs": 0, 00:15:13.547 "current_io_qpairs": 0, 00:15:13.547 "io_qpairs": 0, 00:15:13.547 "name": "nvmf_tgt_poll_group_000", 00:15:13.547 "pending_bdev_io": 0, 00:15:13.547 "transports": [ 00:15:13.547 { 00:15:13.547 "trtype": "TCP" 00:15:13.547 } 00:15:13.547 ] 00:15:13.547 }, 00:15:13.547 { 00:15:13.547 "admin_qpairs": 0, 00:15:13.547 "completed_nvme_io": 0, 00:15:13.547 "current_admin_qpairs": 0, 00:15:13.547 "current_io_qpairs": 0, 00:15:13.547 "io_qpairs": 0, 00:15:13.547 "name": "nvmf_tgt_poll_group_001", 00:15:13.547 "pending_bdev_io": 0, 00:15:13.547 "transports": [ 00:15:13.547 { 00:15:13.547 "trtype": "TCP" 00:15:13.547 } 00:15:13.547 ] 00:15:13.547 }, 00:15:13.547 { 00:15:13.547 "admin_qpairs": 0, 00:15:13.547 "completed_nvme_io": 0, 00:15:13.547 "current_admin_qpairs": 0, 00:15:13.547 "current_io_qpairs": 0, 00:15:13.547 "io_qpairs": 0, 00:15:13.547 "name": "nvmf_tgt_poll_group_002", 00:15:13.547 "pending_bdev_io": 0, 00:15:13.547 "transports": [ 00:15:13.547 { 00:15:13.547 "trtype": "TCP" 00:15:13.547 } 
00:15:13.547 ] 00:15:13.547 }, 00:15:13.547 { 00:15:13.547 "admin_qpairs": 0, 00:15:13.547 "completed_nvme_io": 0, 00:15:13.547 "current_admin_qpairs": 0, 00:15:13.547 "current_io_qpairs": 0, 00:15:13.547 "io_qpairs": 0, 00:15:13.547 "name": "nvmf_tgt_poll_group_003", 00:15:13.547 "pending_bdev_io": 0, 00:15:13.547 "transports": [ 00:15:13.547 { 00:15:13.547 "trtype": "TCP" 00:15:13.547 } 00:15:13.547 ] 00:15:13.547 } 00:15:13.547 ], 00:15:13.547 "tick_rate": 2600000000 00:15:13.547 }' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.547 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.805 Malloc1 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:13.805 14:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.805 [2024-11-06 14:19:41.221874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -a 10.0.0.3 -s 4420 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -a 10.0.0.3 -s 4420 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -a 10.0.0.3 -s 4420 00:15:13.805 [2024-11-06 14:19:41.250617] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435' 00:15:13.805 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:13.805 could not add new controller: failed to write to nvme-fabrics device 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 
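The rejection just logged is the test's expected result: the subsystem was created with allow-any-host, a Malloc1 namespace and a TCP listener were added, allow-any-host was then disabled (nvmf_subsystem_allow_any_host -d), and no host NQN is on the allow list, so the target refuses the CONNECT ("does not allow host") and the initiator reports an I/O error on /dev/nvme-fabrics. The lines that follow whitelist the host NQN (nvmf_subsystem_add_host) and repeat the connect, which then succeeds. A minimal standalone sketch of the same access-control check, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (the harness's rpc_cmd wrapper is written here as rpc.py; the log also passes a matching --hostid, omitted for brevity):

  #!/usr/bin/env bash
  # Host allow-list check, paraphrased from target/rpc.sh@52-62 above.
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435

  rpc.py nvmf_create_subsystem "$SUBNQN" -a -s SPDKISFASTANDAWESOME  # -a: allow any host
  rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1
  rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"                  # -d: disable allow-any
  rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.3 -s 4420

  # Allow-any is off and the allow list is empty, so this CONNECT must fail.
  if nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.3 -s 4420; then
    echo "unexpected: connect should have been rejected" >&2
    exit 1
  fi

  # Whitelisting the host NQN makes the identical connect succeed.
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
  nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.3 -s 4420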
00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:13.805 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:16.411 [2024-11-06 14:19:43.521595] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435' 00:15:16.411 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:16.411 could not add new controller: failed to write to nvme-fabrics device 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:15:16.411 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:16.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:16.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:16.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.309 [2024-11-06 14:19:45.800857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.309 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:18.567 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.567 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:18.567 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.567 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:18.567 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:20.483 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:20.483 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:20.483 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.483 [2024-11-06 14:19:48.100511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.484 14:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.484 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.484 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.484 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.741 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.741 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:20.741 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:20.741 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:20.741 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.741 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:20.741 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:23.266 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.267 14:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.267 [2024-11-06 14:19:50.493479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:23.267 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # sleep 2 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.163 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:25.164 14:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.164 [2024-11-06 14:19:52.785752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.164 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:25.420 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:27.944 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:27.944 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:27.944 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.944 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:27.944 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.944 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:27.944 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:27.944 14:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.944 [2024-11-06 14:19:55.070159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.944 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:27.945 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
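The connect/verify/disconnect cycle traced above is driven by the waitforserial and waitforserial_disconnect helpers from common/autotest_common.sh, which poll lsblk until a block device carrying the expected serial number appears (or disappears). A minimal sketch of the appearance check, reconstructed from the @1200-@1210 trace lines above (the 15-try limit and the 2-second sleep match the trace; the function body is a simplification, not the verbatim helper):

    # Poll until an NVMe block device exposing the given serial enumerates.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2    # give the kernel time to enumerate the namespace
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1       # device never appeared; the caller fails the test
    }

In this run the device showed up on the first poll (nvme_devices=1), so the helper returned immediately and the test moved straight on to nvme disconnect.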
00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 [2024-11-06 14:19:57.366459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 [2024-11-06 14:19:57.414323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:29.851 14:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 [2024-11-06 14:19:57.462360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.851 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 [2024-11-06 14:19:57.518419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 
14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 [2024-11-06 14:19:57.566482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.111 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:30.111 "poll_groups": [ 00:15:30.112 { 00:15:30.112 "admin_qpairs": 2, 00:15:30.112 "completed_nvme_io": 116, 00:15:30.112 "current_admin_qpairs": 0, 00:15:30.112 "current_io_qpairs": 0, 00:15:30.112 "io_qpairs": 16, 00:15:30.112 "name": "nvmf_tgt_poll_group_000", 00:15:30.112 "pending_bdev_io": 0, 00:15:30.112 "transports": [ 00:15:30.112 { 00:15:30.112 "trtype": "TCP" 00:15:30.112 } 00:15:30.112 ] 00:15:30.112 }, 00:15:30.112 { 00:15:30.112 "admin_qpairs": 3, 00:15:30.112 "completed_nvme_io": 165, 00:15:30.112 "current_admin_qpairs": 0, 00:15:30.112 "current_io_qpairs": 0, 00:15:30.112 "io_qpairs": 17, 00:15:30.112 "name": "nvmf_tgt_poll_group_001", 00:15:30.112 "pending_bdev_io": 0, 00:15:30.112 "transports": [ 00:15:30.112 { 00:15:30.112 "trtype": "TCP" 00:15:30.112 } 00:15:30.112 ] 00:15:30.112 }, 00:15:30.112 { 00:15:30.112 "admin_qpairs": 1, 00:15:30.112 "completed_nvme_io": 69, 00:15:30.112 "current_admin_qpairs": 0, 00:15:30.112 "current_io_qpairs": 0, 00:15:30.112 "io_qpairs": 19, 00:15:30.112 "name": "nvmf_tgt_poll_group_002", 00:15:30.112 "pending_bdev_io": 0, 00:15:30.112 "transports": [ 00:15:30.112 { 00:15:30.112 "trtype": "TCP" 00:15:30.112 } 00:15:30.112 ] 00:15:30.112 }, 00:15:30.112 { 00:15:30.112 "admin_qpairs": 1, 00:15:30.112 "completed_nvme_io": 70, 00:15:30.112 "current_admin_qpairs": 0, 00:15:30.112 "current_io_qpairs": 0, 00:15:30.112 "io_qpairs": 18, 00:15:30.112 "name": "nvmf_tgt_poll_group_003", 00:15:30.112 "pending_bdev_io": 0, 00:15:30.112 "transports": [ 00:15:30.112 { 00:15:30.112 "trtype": "TCP" 00:15:30.112 } 00:15:30.112 ] 00:15:30.112 } 00:15:30.112 ], 
00:15:30.112 "tick_rate": 2600000000 00:15:30.112 }' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:30.112 rmmod nvme_tcp 00:15:30.112 rmmod nvme_fabrics 00:15:30.112 rmmod nvme_keyring 00:15:30.112 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 78185 ']' 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 78185 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 78185 ']' 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 78185 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78185 00:15:30.369 killing process with pid 78185 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:30.369 14:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78185' 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 78185 00:15:30.369 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 78185 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:30.933 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:15:31.190 00:15:31.190 real 0m19.246s 00:15:31.190 user 1m11.488s 00:15:31.190 sys 0m2.121s 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.190 ************************************ 00:15:31.190 END TEST nvmf_rpc 00:15:31.190 ************************************ 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:31.190 14:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:31.191 14:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:31.191 14:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.191 ************************************ 00:15:31.191 START TEST nvmf_invalid 00:15:31.191 ************************************ 00:15:31.191 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:31.449 * Looking for test storage... 00:15:31.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:31.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.449 --rc genhtml_branch_coverage=1 00:15:31.449 --rc genhtml_function_coverage=1 00:15:31.449 --rc genhtml_legend=1 00:15:31.449 --rc geninfo_all_blocks=1 00:15:31.449 --rc geninfo_unexecuted_blocks=1 00:15:31.449 00:15:31.449 ' 00:15:31.449 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:31.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.449 --rc genhtml_branch_coverage=1 00:15:31.449 --rc genhtml_function_coverage=1 00:15:31.449 --rc genhtml_legend=1 00:15:31.449 --rc geninfo_all_blocks=1 00:15:31.449 --rc geninfo_unexecuted_blocks=1 00:15:31.449 00:15:31.449 ' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:31.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.450 --rc genhtml_branch_coverage=1 00:15:31.450 --rc genhtml_function_coverage=1 00:15:31.450 --rc genhtml_legend=1 00:15:31.450 --rc geninfo_all_blocks=1 00:15:31.450 --rc geninfo_unexecuted_blocks=1 00:15:31.450 00:15:31.450 ' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:31.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.450 --rc genhtml_branch_coverage=1 00:15:31.450 --rc genhtml_function_coverage=1 00:15:31.450 --rc genhtml_legend=1 00:15:31.450 --rc geninfo_all_blocks=1 00:15:31.450 --rc geninfo_unexecuted_blocks=1 00:15:31.450 00:15:31.450 ' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:31.450 14:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.450 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
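With rpc.py, the NQN prefix, and the invalid-target name defined, nvmftestinit rebuilds the virtual test network. The "Cannot find device" and "Cannot open network namespace" messages below are just the cleanup pass failing harmlessly on a fresh host; nvmf_veth_init then creates veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, all enslaved to one bridge. A condensed sketch of the first initiator/target pair, as shown in the sequence below (addresses follow the 10.0.0.0/24 plan above; the real helper also wires the second pair and inserts the iptables ACCEPT rules seen further down):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge both sides
    ip link set nvmf_tgt_br master nvmf_br

The four pings that close the sequence verify initiator-to-target reachability in both directions before the target application is started.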
00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:31.450 Cannot find device "nvmf_init_br" 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:31.450 Cannot find device "nvmf_init_br2" 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:31.450 Cannot find device "nvmf_tgt_br" 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.450 Cannot find device "nvmf_tgt_br2" 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:15:31.450 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:31.451 Cannot find device "nvmf_init_br" 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:31.451 Cannot find device "nvmf_init_br2" 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:31.451 Cannot find device "nvmf_tgt_br" 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:31.451 Cannot find device "nvmf_tgt_br2" 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:31.451 Cannot find device "nvmf_br" 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:31.451 Cannot find device "nvmf_init_if" 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:31.451 Cannot find device "nvmf_init_if2" 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.451 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.451 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.710 14:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:31.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:31.710 00:15:31.710 --- 10.0.0.3 ping statistics --- 00:15:31.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.710 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:31.710 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:31.710 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:15:31.710 00:15:31.710 --- 10.0.0.4 ping statistics --- 00:15:31.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.710 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:31.710 00:15:31.710 --- 10.0.0.1 ping statistics --- 00:15:31.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.710 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:31.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:31.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:15:31.710 00:15:31.710 --- 10.0.0.2 ping statistics --- 00:15:31.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.710 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=78750 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 78750 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 78750 ']' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:31.710 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.710 [2024-11-06 14:19:59.332290] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
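With the data path verified and the nvme-tcp module loaded, nvmfappstart has just launched nvmf_tgt inside the target namespace (pid 78750 in this run) and blocks in waitforlisten until the app answers RPCs. A minimal sketch of that startup pattern (the binary path, the -i 0 -e 0xFFFF -m 0xF flags, and the /var/tmp/spdk.sock address come from the trace above; the polling loop itself is an illustrative simplification of waitforlisten, not its verbatim body):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the target's JSON-RPC socket accepts commands.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done

The EAL and reactor notices that follow are the app claiming its four cores (mask 0xF) before the first RPCs are issued.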
00:15:31.710 [2024-11-06 14:19:59.332402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.969 [2024-11-06 14:19:59.491387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.969 [2024-11-06 14:19:59.595263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.969 [2024-11-06 14:19:59.595304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.969 [2024-11-06 14:19:59.595316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.969 [2024-11-06 14:19:59.595324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.969 [2024-11-06 14:19:59.595333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.969 [2024-11-06 14:19:59.596732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.969 [2024-11-06 14:19:59.597157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.969 [2024-11-06 14:19:59.597211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.969 [2024-11-06 14:19:59.597243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23332 00:15:32.900 [2024-11-06 14:20:00.392433] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/06 14:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23332 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:15:32.900 request: 00:15:32.900 { 00:15:32.900 "method": "nvmf_create_subsystem", 00:15:32.900 "params": { 00:15:32.900 "nqn": "nqn.2016-06.io.spdk:cnode23332", 00:15:32.900 "tgt_name": "foobar" 00:15:32.900 } 00:15:32.900 } 00:15:32.900 Got JSON-RPC error response 00:15:32.900 GoRPCClient: error on JSON-RPC call' 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/06 14:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode23332 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:15:32.900 request: 00:15:32.900 { 00:15:32.900 "method": "nvmf_create_subsystem", 00:15:32.900 "params": { 00:15:32.900 "nqn": "nqn.2016-06.io.spdk:cnode23332", 00:15:32.900 "tgt_name": "foobar" 00:15:32.900 } 00:15:32.900 } 00:15:32.900 Got JSON-RPC error response 00:15:32.900 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:32.900 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20059 00:15:33.158 [2024-11-06 14:20:00.604640] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20059: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:33.158 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/06 14:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20059 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:15:33.158 request: 00:15:33.158 { 00:15:33.158 "method": "nvmf_create_subsystem", 00:15:33.158 "params": { 00:15:33.158 "nqn": "nqn.2016-06.io.spdk:cnode20059", 00:15:33.158 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:15:33.158 } 00:15:33.158 } 00:15:33.158 Got JSON-RPC error response 00:15:33.158 GoRPCClient: error on JSON-RPC call' 00:15:33.158 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/06 14:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20059 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:15:33.158 request: 00:15:33.158 { 00:15:33.158 "method": "nvmf_create_subsystem", 00:15:33.158 "params": { 00:15:33.158 "nqn": "nqn.2016-06.io.spdk:cnode20059", 00:15:33.158 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:15:33.158 } 00:15:33.158 } 00:15:33.158 Got JSON-RPC error response 00:15:33.158 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:33.158 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:33.158 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4561 00:15:33.416 [2024-11-06 14:20:00.816854] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4561: invalid model number 'SPDK_Controller' 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/06 14:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode4561], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:15:33.416 request: 00:15:33.416 { 00:15:33.416 "method": "nvmf_create_subsystem", 00:15:33.416 "params": { 00:15:33.416 "nqn": "nqn.2016-06.io.spdk:cnode4561", 00:15:33.416 "model_number": "SPDK_Controller\u001f" 00:15:33.416 
} 00:15:33.416 } 00:15:33.416 Got JSON-RPC error response 00:15:33.416 GoRPCClient: error on JSON-RPC call' 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/06 14:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode4561], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:15:33.416 request: 00:15:33.416 { 00:15:33.416 "method": "nvmf_create_subsystem", 00:15:33.416 "params": { 00:15:33.416 "nqn": "nqn.2016-06.io.spdk:cnode4561", 00:15:33.416 "model_number": "SPDK_Controller\u001f" 00:15:33.416 } 00:15:33.416 } 00:15:33.416 Got JSON-RPC error response 00:15:33.416 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:33.416 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
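The repetitive printf %x / echo -e / string+= entries around this point are gen_random_s in target/invalid.sh assembling a 21-character serial number one byte at a time from the codes 32-127 enumerated in the chars array. The per-character selection does not appear in the xtrace, so $RANDOM indexing is an assumption in this condensed sketch of the same pattern:

    gen_random_s_sketch() {
        local length=$1 ll string=''
        local chars=({32..127})    # same code range as the chars=() array in the trace
        for ((ll = 0; ll < length; ll++)); do
            # pick a code, render it as hex via printf %x, then as a
            # character via echo -e, exactly as each trace entry shows
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\x$(printf %x "$code")")
        done
        echo "$string"
    }

The [[ ... == \- ]] test at invalid.sh@28 then checks whether the finished string begins with '-', presumably so it cannot be mistaken for an rpc.py option.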
00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x49' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
94 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]] 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ':2mdi;.2px-}+I%fgLeU^' 00:15:33.417 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ':2mdi;.2px-}+I%fgLeU^' nqn.2016-06.io.spdk:cnode14707 00:15:33.676 [2024-11-06 14:20:01.117158] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14707: invalid serial number ':2mdi;.2px-}+I%fgLeU^' 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/06 14:20:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14707 serial_number::2mdi;.2px-}+I%fgLeU^], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN :2mdi;.2px-}+I%fgLeU^ 00:15:33.676 request: 00:15:33.676 { 00:15:33.676 "method": "nvmf_create_subsystem", 00:15:33.676 "params": { 00:15:33.676 "nqn": "nqn.2016-06.io.spdk:cnode14707", 00:15:33.676 "serial_number": ":2mdi;.2px-}+I%fgLeU^" 00:15:33.676 } 00:15:33.676 } 00:15:33.676 Got JSON-RPC error response 00:15:33.676 GoRPCClient: error on JSON-RPC call' 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/06 14:20:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14707 serial_number::2mdi;.2px-}+I%fgLeU^], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN :2mdi;.2px-}+I%fgLeU^ 00:15:33.676 request: 00:15:33.676 { 00:15:33.676 "method": "nvmf_create_subsystem", 00:15:33.676 "params": { 00:15:33.676 "nqn": "nqn.2016-06.io.spdk:cnode14707", 00:15:33.676 "serial_number": ":2mdi;.2px-}+I%fgLeU^" 00:15:33.676 } 00:15:33.676 } 00:15:33.676 Got JSON-RPC error response 00:15:33.676 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:33.676 14:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:33.676 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 
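The string lengths are deliberate: NVMe's Identify Controller data fixes the serial number (SN) at 20 ASCII bytes and the model number (MN) at 40, so the script requests 21 and 41 characters, one byte over each limit, and expects nvmf_create_subsystem to reject them with Invalid SN / Invalid MN. The call shape, sketched with illustrative variable names (the NQNs and the -s/-d flags are the ones used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bad_sn=$(gen_random_s_sketch 21)    # one over the 20-byte SN field
    bad_mn=$(gen_random_s_sketch 41)    # one over the 40-byte MN field
    "$rpc" nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode14707 && exit 1
    "$rpc" nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode12495 && exit 1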
00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
35 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:33.677 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=p 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
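Every negative test in this file asserts the same way: rpc.py's failure output is captured into $out, then matched with a backslash-escaped glob such as == *\I\n\v\a\l\i\d\ \M\N* so the spaces in the expected message survive [[ ]] parsing. Control bytes are also worth tracing here: this 41-character string opens with \x7f (DEL), which is invisible in the echoed value, shows up as an empty-looking operand in the [[  == \- ]] first-character check below, and is escaped as \u007f in the JSON-RPC response. A quoted-string equivalent of the assert (variable names illustrative):

    out=$("$rpc" nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode12495 2>&1) || true
    [[ $out == *"Invalid MN"* ]]    # same match as the escaped glob in the trace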
00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm<|h]fy#"+{b#ERUR|<40cc$KW}jb8;rp"`@?R^$' 00:15:33.678 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'm<|h]fy#"+{b#ERUR|<40cc$KW}jb8;rp"`@?R^$' nqn.2016-06.io.spdk:cnode12495 00:15:33.935 [2024-11-06 14:20:01.457501] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12495: invalid model number 'm<|h]fy#"+{b#ERUR|<40cc$KW}jb8;rp"`@?R^$' 00:15:33.935 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/06 14:20:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:m<|h]fy#"+{b#ERUR|<40cc$KW}jb8;rp"`@?R^$ nqn:nqn.2016-06.io.spdk:cnode12495], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN m<|h]fy#"+{b#ERUR|<40cc$KW}jb8;rp"`@?R^$ 00:15:33.935 request: 00:15:33.935 { 00:15:33.935 "method": "nvmf_create_subsystem", 00:15:33.935 "params": { 00:15:33.935 "nqn": "nqn.2016-06.io.spdk:cnode12495", 00:15:33.935 "model_number": "\u007fm<|h]fy#\"+{b#ERUR|<40cc$KW}jb8;rp\"`@?R^$" 00:15:33.935 } 00:15:33.935 } 00:15:33.935 Got JSON-RPC error response 00:15:33.935 GoRPCClient: error on JSON-RPC call' 00:15:33.935 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/06 14:20:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:m<|h]fy#"+{b#ERUR|<40cc$KW}jb8;rp"`@?R^$ nqn:nqn.2016-06.io.spdk:cnode12495], err: error received for nvmf_create_subsystem method, err: 
Code=-32602 Msg=Invalid MN m<|h]fy#"+{b#ERUR|<40cc$KW}jb8;rp"`@?R^$ 00:15:33.935 request: 00:15:33.935 { 00:15:33.935 "method": "nvmf_create_subsystem", 00:15:33.935 "params": { 00:15:33.935 "nqn": "nqn.2016-06.io.spdk:cnode12495", 00:15:33.935 "model_number": "\u007fm<|h]fy#\"+{b#ERUR|<40cc$KW}jb8;rp\"`@?R^$" 00:15:33.935 } 00:15:33.935 } 00:15:33.935 Got JSON-RPC error response 00:15:33.935 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:33.935 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:34.192 [2024-11-06 14:20:01.665744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.192 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:34.450 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:34.450 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:34.450 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:34.450 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:34.450 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:34.450 [2024-11-06 14:20:02.082566] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:34.708 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:15:34.708 request: 00:15:34.708 { 00:15:34.708 "method": "nvmf_subsystem_remove_listener", 00:15:34.708 "params": { 00:15:34.708 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:34.708 "listen_address": { 00:15:34.708 "trtype": "tcp", 00:15:34.708 "traddr": "", 00:15:34.708 "trsvcid": "4421" 00:15:34.708 } 00:15:34.708 } 00:15:34.708 } 00:15:34.708 Got JSON-RPC error response 00:15:34.708 GoRPCClient: error on JSON-RPC call' 00:15:34.708 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:15:34.708 request: 00:15:34.708 { 00:15:34.708 "method": "nvmf_subsystem_remove_listener", 00:15:34.708 "params": { 00:15:34.708 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:34.708 "listen_address": { 00:15:34.708 "trtype": "tcp", 00:15:34.708 "traddr": "", 00:15:34.708 "trsvcid": "4421" 00:15:34.708 } 00:15:34.708 } 00:15:34.708 } 00:15:34.708 Got JSON-RPC error response 00:15:34.708 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:34.708 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27279 -i 0 00:15:34.708 
[2024-11-06 14:20:02.294756] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27279: invalid cntlid range [0-65519] 00:15:34.708 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode27279], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:15:34.708 request: 00:15:34.708 { 00:15:34.708 "method": "nvmf_create_subsystem", 00:15:34.708 "params": { 00:15:34.708 "nqn": "nqn.2016-06.io.spdk:cnode27279", 00:15:34.708 "min_cntlid": 0 00:15:34.708 } 00:15:34.708 } 00:15:34.708 Got JSON-RPC error response 00:15:34.708 GoRPCClient: error on JSON-RPC call' 00:15:34.708 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode27279], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:15:34.708 request: 00:15:34.708 { 00:15:34.708 "method": "nvmf_create_subsystem", 00:15:34.708 "params": { 00:15:34.708 "nqn": "nqn.2016-06.io.spdk:cnode27279", 00:15:34.708 "min_cntlid": 0 00:15:34.708 } 00:15:34.708 } 00:15:34.708 Got JSON-RPC error response 00:15:34.708 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:34.708 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28252 -i 65520 00:15:34.965 [2024-11-06 14:20:02.498973] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28252: invalid cntlid range [65520-65519] 00:15:34.965 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28252], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:15:34.965 request: 00:15:34.965 { 00:15:34.965 "method": "nvmf_create_subsystem", 00:15:34.965 "params": { 00:15:34.965 "nqn": "nqn.2016-06.io.spdk:cnode28252", 00:15:34.965 "min_cntlid": 65520 00:15:34.965 } 00:15:34.965 } 00:15:34.965 Got JSON-RPC error response 00:15:34.965 GoRPCClient: error on JSON-RPC call' 00:15:34.965 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28252], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:15:34.965 request: 00:15:34.965 { 00:15:34.965 "method": "nvmf_create_subsystem", 00:15:34.965 "params": { 00:15:34.965 "nqn": "nqn.2016-06.io.spdk:cnode28252", 00:15:34.965 "min_cntlid": 65520 00:15:34.965 } 00:15:34.965 } 00:15:34.965 Got JSON-RPC error response 00:15:34.965 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:34.965 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2361 -I 0 00:15:35.222 [2024-11-06 14:20:02.711178] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode2361: invalid cntlid range [1-0] 00:15:35.222 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2361], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:15:35.222 request: 00:15:35.222 { 00:15:35.222 "method": "nvmf_create_subsystem", 00:15:35.222 "params": { 00:15:35.222 "nqn": "nqn.2016-06.io.spdk:cnode2361", 00:15:35.222 "max_cntlid": 0 00:15:35.222 } 00:15:35.222 } 00:15:35.222 Got JSON-RPC error response 00:15:35.222 GoRPCClient: error on JSON-RPC call' 00:15:35.222 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2361], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:15:35.222 request: 00:15:35.222 { 00:15:35.222 "method": "nvmf_create_subsystem", 00:15:35.222 "params": { 00:15:35.222 "nqn": "nqn.2016-06.io.spdk:cnode2361", 00:15:35.222 "max_cntlid": 0 00:15:35.222 } 00:15:35.222 } 00:15:35.222 Got JSON-RPC error response 00:15:35.222 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:35.222 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29853 -I 65520 00:15:35.480 [2024-11-06 14:20:02.919410] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29853: invalid cntlid range [1-65520] 00:15:35.480 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29853], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:15:35.480 request: 00:15:35.480 { 00:15:35.480 "method": "nvmf_create_subsystem", 00:15:35.480 "params": { 00:15:35.480 "nqn": "nqn.2016-06.io.spdk:cnode29853", 00:15:35.480 "max_cntlid": 65520 00:15:35.480 } 00:15:35.480 } 00:15:35.480 Got JSON-RPC error response 00:15:35.480 GoRPCClient: error on JSON-RPC call' 00:15:35.480 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/11/06 14:20:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29853], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:15:35.480 request: 00:15:35.480 { 00:15:35.480 "method": "nvmf_create_subsystem", 00:15:35.480 "params": { 00:15:35.480 "nqn": "nqn.2016-06.io.spdk:cnode29853", 00:15:35.480 "max_cntlid": 65520 00:15:35.480 } 00:15:35.480 } 00:15:35.480 Got JSON-RPC error response 00:15:35.480 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:35.480 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4450 -i 6 -I 5 00:15:35.738 [2024-11-06 14:20:03.127638] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4450: invalid cntlid range [6-5] 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@83 -- # out='2024/11/06 14:20:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4450], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:15:35.738 request: 00:15:35.738 { 00:15:35.738 "method": "nvmf_create_subsystem", 00:15:35.738 "params": { 00:15:35.738 "nqn": "nqn.2016-06.io.spdk:cnode4450", 00:15:35.738 "min_cntlid": 6, 00:15:35.738 "max_cntlid": 5 00:15:35.738 } 00:15:35.738 } 00:15:35.738 Got JSON-RPC error response 00:15:35.738 GoRPCClient: error on JSON-RPC call' 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/06 14:20:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4450], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:15:35.738 request: 00:15:35.738 { 00:15:35.738 "method": "nvmf_create_subsystem", 00:15:35.738 "params": { 00:15:35.738 "nqn": "nqn.2016-06.io.spdk:cnode4450", 00:15:35.738 "min_cntlid": 6, 00:15:35.738 "max_cntlid": 5 00:15:35.738 } 00:15:35.738 } 00:15:35.738 Got JSON-RPC error response 00:15:35.738 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:35.738 { 00:15:35.738 "name": "foobar", 00:15:35.738 "method": "nvmf_delete_target", 00:15:35.738 "req_id": 1 00:15:35.738 } 00:15:35.738 Got JSON-RPC error response 00:15:35.738 response: 00:15:35.738 { 00:15:35.738 "code": -32602, 00:15:35.738 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:35.738 }' 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:35.738 { 00:15:35.738 "name": "foobar", 00:15:35.738 "method": "nvmf_delete_target", 00:15:35.738 "req_id": 1 00:15:35.738 } 00:15:35.738 Got JSON-RPC error response 00:15:35.738 response: 00:15:35.738 { 00:15:35.738 "code": -32602, 00:15:35.738 "message": "The specified target doesn't exist, cannot delete it." 
00:15:35.738 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.738 rmmod nvme_tcp 00:15:35.738 rmmod nvme_fabrics 00:15:35.738 rmmod nvme_keyring 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 78750 ']' 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 78750 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 78750 ']' 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 78750 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78750 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78750' 00:15:35.738 killing process with pid 78750 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 78750 00:15:35.738 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 78750 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:36.669 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:15:36.927 00:15:36.927 real 0m5.581s 00:15:36.927 user 0m20.330s 00:15:36.927 sys 0m1.067s 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:36.927 ************************************ 00:15:36.927 END TEST nvmf_invalid 00:15:36.927 ************************************ 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:36.927 ************************************ 00:15:36.927 START TEST nvmf_connect_stress 00:15:36.927 ************************************ 
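The connect_stress run starting here first builds a TCP target over JSON-RPC before launching the stress client. A minimal stand-alone re-creation of that bring-up, assuming a running nvmf_tgt reachable on the default RPC socket; the NQN, serial, listener address, and bdev name are taken from the trace below, while the per-flag comments are best-effort readings rather than authoritative documentation:

    #!/usr/bin/env bash
    # Sketch of the RPC sequence traced later in this test:
    # nvmf_create_transport -> nvmf_create_subsystem -> add_listener -> bdev_null_create.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                    # same flags as the traced call
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                             # allow any host, set serial, cap namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4420                                 # listen on the target-namespace veth address
    $rpc bdev_null_create NULL1 1000 512                            # 1000 MiB null bdev, 512 B blocks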
00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:36.927 * Looking for test storage... 00:15:36.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:36.927 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.928 --rc genhtml_branch_coverage=1 00:15:36.928 --rc genhtml_function_coverage=1 00:15:36.928 --rc genhtml_legend=1 00:15:36.928 --rc geninfo_all_blocks=1 00:15:36.928 --rc geninfo_unexecuted_blocks=1 00:15:36.928 00:15:36.928 ' 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.928 --rc genhtml_branch_coverage=1 00:15:36.928 --rc genhtml_function_coverage=1 00:15:36.928 --rc genhtml_legend=1 00:15:36.928 --rc geninfo_all_blocks=1 00:15:36.928 --rc geninfo_unexecuted_blocks=1 00:15:36.928 00:15:36.928 ' 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.928 --rc genhtml_branch_coverage=1 00:15:36.928 --rc genhtml_function_coverage=1 00:15:36.928 --rc genhtml_legend=1 00:15:36.928 --rc geninfo_all_blocks=1 00:15:36.928 --rc geninfo_unexecuted_blocks=1 00:15:36.928 00:15:36.928 ' 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.928 --rc genhtml_branch_coverage=1 00:15:36.928 --rc genhtml_function_coverage=1 00:15:36.928 --rc genhtml_legend=1 00:15:36.928 --rc geninfo_all_blocks=1 00:15:36.928 --rc geninfo_unexecuted_blocks=1 00:15:36.928 00:15:36.928 ' 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
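The trace above is scripts/common.sh comparing the installed lcov version against 2 (here 1.15 < 2, so the legacy --rc lcov_* options get exported). A stand-alone sketch of that component-wise comparison, simplified to the '<' case only and assuming purely numeric version components:

    # version_lt A B: succeed when version A sorts strictly below version B.
    # Mirrors the cmp_versions logic traced above; '.' and '-' both split components.
    version_lt() {
        local IFS=.-
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not strictly less
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: use legacy --rc lcov_* flags"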
00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.928 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:37.185 14:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:37.185 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.186 14:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:37.186 Cannot find device "nvmf_init_br" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:37.186 Cannot find device "nvmf_init_br2" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:37.186 Cannot find device "nvmf_tgt_br" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.186 Cannot find device "nvmf_tgt_br2" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:37.186 Cannot find device "nvmf_init_br" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:37.186 Cannot find device "nvmf_init_br2" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:37.186 Cannot find device "nvmf_tgt_br" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:37.186 Cannot find device "nvmf_tgt_br2" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:37.186 Cannot find device "nvmf_br" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:37.186 Cannot find device "nvmf_init_if" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:37.186 Cannot find device "nvmf_init_if2" 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.186 14:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:37.186 14:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:37.186 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:37.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:15:37.444 00:15:37.444 --- 10.0.0.3 ping statistics --- 00:15:37.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.444 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:37.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:37.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:15:37.444 00:15:37.444 --- 10.0.0.4 ping statistics --- 00:15:37.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.444 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:37.444 00:15:37.444 --- 10.0.0.1 ping statistics --- 00:15:37.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.444 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:37.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:37.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:15:37.444 00:15:37.444 --- 10.0.0.2 ping statistics --- 00:15:37.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.444 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=79307 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 79307 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 79307 ']' 00:15:37.444 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.445 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:37.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.445 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.445 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:37.445 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.445 [2024-11-06 14:20:04.924941] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:15:37.445 [2024-11-06 14:20:04.925072] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.704 [2024-11-06 14:20:05.082905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:37.704 [2024-11-06 14:20:05.187230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.704 [2024-11-06 14:20:05.187281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.704 [2024-11-06 14:20:05.187292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.704 [2024-11-06 14:20:05.187301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.704 [2024-11-06 14:20:05.187311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.704 [2024-11-06 14:20:05.188635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.704 [2024-11-06 14:20:05.188765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.704 [2024-11-06 14:20:05.188908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 [2024-11-06 14:20:05.823560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:38.269 14:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 [2024-11-06 14:20:05.843715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 NULL1 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=79359 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.269 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.526 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.784 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:15:38.784 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:38.784 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.784 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.784 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.042 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.042 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:39.042 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.042 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.042 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.303 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.303 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:39.303 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.303 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.303 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.872 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.872 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:39.872 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.872 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.872 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.131 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.131 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:40.131 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.131 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.131 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.389 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.389 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:40.389 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.389 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.389 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.646 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.646 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:48.396 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.396 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.396 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.654 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 79359 00:15:48.654 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (79359) - No such process 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 79359 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:48.654 rmmod nvme_tcp 00:15:48.654 rmmod nvme_fabrics 00:15:48.654 rmmod nvme_keyring 00:15:48.654 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 79307 ']' 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 79307 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 79307 ']' 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 79307 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79307 00:15:48.912 killing process with pid 79307 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:48.912 
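The repeated trace quadruplets above are connect_stress.sh's wait loop: line 34 polls kill -0 on the stress process (PID 79359 in this run) while line 35 keeps issuing RPCs at the target, until the PID disappears ("No such process") and line 38 reaps it with wait; the xtrace_disable / set +x pairs are apparently rpc_cmd suppressing its own trace noise. A minimal sketch of that pattern, paraphrased from the traced commands (the helper name poll_until_dead is ours, not SPDK's):

# Hypothetical condensation of the connect_stress.sh lines 34-38 traced above.
poll_until_dead() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do   # kill -0 sends no signal, only checks liveness
        rpc_cmd                            # keep the target busy with RPC traffic meanwhile
    done
    wait "$pid"                            # reap the child and propagate its exit status
}
poll_until_dead 79359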
14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79307' 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 79307 00:15:48.912 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 79307 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:49.476 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:49.476 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:49.476 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:49.476 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.476 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.735 14:20:17 
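nvmftestfini then unwinds everything the test set up: the initiator kernel modules are unloaded, the nvmf_tgt reactor (PID 79307 here) is killed and reaped, only the SPDK-tagged iptables rules are stripped, and the veth/bridge/namespace plumbing is torn down. Condensed from the commands traced above; this is a sketch of the order, not the scripts verbatim, and the *2 twin interfaces get the same treatment:

# Teardown order as traced: modules -> target process -> firewall -> links -> netns.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                     # 79307 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                       # roughly what _remove_spdk_ns amounts to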
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:15:49.735 00:15:49.735 real 0m12.737s 00:15:49.735 user 0m41.975s 00:15:49.735 sys 0m2.910s 00:15:49.735 ************************************ 00:15:49.735 END TEST nvmf_connect_stress 00:15:49.735 ************************************ 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.735 ************************************ 00:15:49.735 START TEST nvmf_fused_ordering 00:15:49.735 ************************************ 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:49.735 * Looking for test storage... 00:15:49.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.735 14:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:49.735 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:49.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.736 --rc genhtml_branch_coverage=1 00:15:49.736 --rc genhtml_function_coverage=1 00:15:49.736 --rc genhtml_legend=1 00:15:49.736 --rc geninfo_all_blocks=1 00:15:49.736 --rc geninfo_unexecuted_blocks=1 00:15:49.736 00:15:49.736 ' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:49.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.736 --rc genhtml_branch_coverage=1 00:15:49.736 --rc genhtml_function_coverage=1 00:15:49.736 --rc genhtml_legend=1 00:15:49.736 --rc geninfo_all_blocks=1 00:15:49.736 --rc geninfo_unexecuted_blocks=1 00:15:49.736 00:15:49.736 ' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:49.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.736 --rc genhtml_branch_coverage=1 00:15:49.736 --rc genhtml_function_coverage=1 00:15:49.736 --rc genhtml_legend=1 00:15:49.736 --rc geninfo_all_blocks=1 00:15:49.736 --rc geninfo_unexecuted_blocks=1 00:15:49.736 00:15:49.736 ' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:49.736 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:49.736 --rc genhtml_branch_coverage=1 00:15:49.736 --rc genhtml_function_coverage=1 00:15:49.736 --rc genhtml_legend=1 00:15:49.736 --rc geninfo_all_blocks=1 00:15:49.736 --rc geninfo_unexecuted_blocks=1 00:15:49.736 00:15:49.736 ' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
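The scripts/common.sh trace above (lt 1.15 2 -> cmp_versions) is how autotest decides whether the installed lcov is pre-2.x and therefore apparently needs the older --rc lcov_branch_coverage / lcov_function_coverage spellings it exports right afterwards. The comparison splits each version on '.', '-' and ':' and compares numerically component by component, with missing components counting as zero. A minimal sketch under those assumptions (version_lt is our name, and the real helpers also validate each component with decimal()):

# Component-wise 'less than' for dotted versions, assuming numeric components.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "pre-2.x lcov"   # 1 < 2 on the first component, as in the trace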
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:49.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:49.736 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:49.737 14:20:17 
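The "[: : integer expression expected" failure from nvmf/common.sh line 33 above is bash's test builtin rejecting an empty string as an operand of -eq; the script shrugs it off because the test simply evaluates false and execution continues, but the noise is avoidable by defaulting the variable before the numeric comparison. A hedged illustration (var is a stand-in; the log does not show which variable was empty):

# [ '' -eq 1 ] is a type error in test(1), not merely 'false'.
var=''
if [ "${var:-0}" -eq 1 ]; then   # default empty/unset to 0 so -eq always sees an integer
    echo enabled
fi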
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:49.737 Cannot find device "nvmf_init_br" 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:49.737 Cannot find device "nvmf_init_br2" 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:15:49.737 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:49.995 Cannot find device "nvmf_tgt_br" 00:15:49.995 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.996 Cannot find device "nvmf_tgt_br2" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:49.996 Cannot find device "nvmf_init_br" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:49.996 Cannot find device "nvmf_init_br2" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:49.996 Cannot find device "nvmf_tgt_br" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:49.996 Cannot find device "nvmf_tgt_br2" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:49.996 Cannot find device "nvmf_br" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:49.996 Cannot find device "nvmf_init_if" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:15:49.996 
14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:49.996 Cannot find device "nvmf_init_if2" 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.996 14:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:49.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:49.996 00:15:49.996 --- 10.0.0.3 ping statistics --- 00:15:49.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.996 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:49.996 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:49.996 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:49.996 00:15:49.996 --- 10.0.0.4 ping statistics --- 00:15:49.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.996 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:49.996 00:15:49.996 --- 10.0.0.1 ping statistics --- 00:15:49.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.996 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:49.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:49.996 00:15:49.996 --- 10.0.0.2 ping statistics --- 00:15:49.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.996 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:49.996 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=79744 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 79744 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 79744 ']' 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:50.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
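Everything from the "Cannot find device" probes down to the four pings above is nvmf_veth_init building its test network: the failing probes are a best-effort cleanup of leftovers from a previous run (each is followed by a tolerated true), then two initiator veths stay in the root namespace (10.0.0.1/.2) while two target veths move into nvmf_tgt_ns_spdk (10.0.0.3/.4), all four bridge-side peers are enslaved to nvmf_br, port 4420 is opened, and the pings prove reachability in both directions. Condensed here to one initiator/target pair; the second pair is set up identically in the trace:

# One veth pair per side, bridged together; the target side lives in its own netns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
ping -c 1 10.0.0.3   # root namespace -> target namespace, as verified above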
00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:50.254 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:50.254 [2024-11-06 14:20:17.714691] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:15:50.255 [2024-11-06 14:20:17.714806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.255 [2024-11-06 14:20:17.872038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.551 [2024-11-06 14:20:17.971310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.551 [2024-11-06 14:20:17.971361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.551 [2024-11-06 14:20:17.971373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.551 [2024-11-06 14:20:17.971388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.551 [2024-11-06 14:20:17.971397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.551 [2024-11-06 14:20:17.972263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.116 [2024-11-06 14:20:18.563494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.116 [2024-11-06 14:20:18.579597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.116 NULL1 00:15:51.116 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.117 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:51.117 [2024-11-06 14:20:18.643915] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
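Before launching the fused_ordering binary above, fused_ordering.sh provisioned the target entirely over JSON-RPC: rpc_cmd, which in autotest drives scripts/rpc.py against the target's RPC socket, created the TCP transport, a subsystem with room for 10 namespaces, a listener on 10.0.0.3:4420, a 1000 MiB null bdev with 512-byte blocks, and attached that bdev as namespace 1 (hence the "Namespace ID: 1 size: 1GB" the client prints below). Spelled out as direct rpc.py calls equivalent to the six rpc_cmd invocations traced above, assuming the default RPC socket:

# The provisioning RPCs from fused_ordering.sh lines 15-20, made explicit.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_null_create NULL1 1000 512    # name, size in MiB, block size in bytes
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1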
00:15:51.117 [2024-11-06 14:20:18.644008] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79794 ]
00:15:51.683 Attached to nqn.2016-06.io.spdk:cnode1
00:15:51.683 Namespace ID: 1 size: 1GB
00:15:51.683 fused_ordering(0) ... fused_ordering(1023) 00:15:53.288 [1024 per-iteration lines, one per fused pair, elided]
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:53.288 rmmod nvme_tcp
00:15:53.288 rmmod nvme_fabrics
00:15:53.288 rmmod nvme_keyring
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:15:53.288 14:20:20
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 79744 ']' 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 79744 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 79744 ']' 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 79744 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79744 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:53.288 killing process with pid 79744 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79744' 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 79744 00:15:53.288 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 79744 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:54.221 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:54.222 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:54.222 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.222 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.222 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.222 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.222 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.222 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:15:54.481 00:15:54.481 real 0m4.687s 00:15:54.481 user 0m5.453s 00:15:54.481 sys 0m1.172s 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.481 ************************************ 00:15:54.481 END TEST nvmf_fused_ordering 00:15:54.481 ************************************ 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.481 ************************************ 00:15:54.481 START TEST nvmf_ns_masking 00:15:54.481 ************************************ 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:54.481 * Looking for test storage... 
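The nvmftestfini teardown traced above reduces to the sketch below. remove_spdk_ns is not expanded in this excerpt, so the closing ip netns delete is an assumption about its effect, and $nvmfpid stands in for the target pid (79744 in this run):

    modprobe -v -r nvme-tcp                                      # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                           # $nvmfpid: hypothetical name for the target pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore         # drop only the rules tagged SPDK_NVMF
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster                                # detach every veth end from the bridge
      ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge                           # then drop the bridge itself
    ip link delete nvmf_init_if                                  # initiator-side veth pairs
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # target-side ends live inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                             # assumed: what remove_spdk_ns does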
00:15:54.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:54.481 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:54.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.481 --rc genhtml_branch_coverage=1 00:15:54.481 --rc genhtml_function_coverage=1 00:15:54.481 --rc genhtml_legend=1 00:15:54.481 --rc geninfo_all_blocks=1 00:15:54.481 --rc geninfo_unexecuted_blocks=1 00:15:54.481 00:15:54.481 ' 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:54.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.481 --rc genhtml_branch_coverage=1 00:15:54.481 --rc genhtml_function_coverage=1 00:15:54.481 --rc genhtml_legend=1 00:15:54.481 --rc geninfo_all_blocks=1 00:15:54.481 --rc geninfo_unexecuted_blocks=1 00:15:54.481 00:15:54.481 ' 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:54.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.481 --rc genhtml_branch_coverage=1 00:15:54.481 --rc genhtml_function_coverage=1 00:15:54.481 --rc genhtml_legend=1 00:15:54.481 --rc geninfo_all_blocks=1 00:15:54.481 --rc geninfo_unexecuted_blocks=1 00:15:54.481 00:15:54.481 ' 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:54.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.481 --rc genhtml_branch_coverage=1 00:15:54.481 --rc genhtml_function_coverage=1 00:15:54.481 --rc genhtml_legend=1 00:15:54.481 --rc geninfo_all_blocks=1 00:15:54.481 --rc geninfo_unexecuted_blocks=1 00:15:54.481 00:15:54.481 ' 00:15:54.481 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=488c825a-c8db-4f40-8cc3-d9c33391a9bf 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4d9b5630-e140-4547-aa6e-3938f8341fa2 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=69f6e218-7b20-4284-ab61-76acb3d3c0c5 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.482 14:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.482 Cannot find device "nvmf_init_br" 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.482 Cannot find device "nvmf_init_br2" 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.482 Cannot find device "nvmf_tgt_br" 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:15:54.482 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.740 Cannot find device "nvmf_tgt_br2" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.740 Cannot find device "nvmf_init_br" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.740 Cannot find device "nvmf_init_br2" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.740 Cannot find device "nvmf_tgt_br" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.740 Cannot find device 
"nvmf_tgt_br2" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:54.740 Cannot find device "nvmf_br" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:54.740 Cannot find device "nvmf_init_if" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:54.740 Cannot find device "nvmf_init_if2" 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.740 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:54.741 
14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.741 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:54.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:15:54.999 00:15:54.999 --- 10.0.0.3 ping statistics --- 00:15:54.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.999 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:54.999 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:54.999 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:15:54.999 00:15:54.999 --- 10.0.0.4 ping statistics --- 00:15:54.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.999 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:15:54.999 00:15:54.999 --- 10.0.0.1 ping statistics --- 00:15:54.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.999 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:54.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:15:54.999 00:15:54.999 --- 10.0.0.2 ping statistics --- 00:15:54.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.999 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=80068 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 80068 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 80068 ']' 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:54.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
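Everything from the "Cannot find device" probes down to the four pings is nvmf_veth_init: the failing teardown commands only clear leftovers from a previous run (each is followed by true), after which a fresh namespace/veth/bridge topology is built and verified. A condensed sketch of the construction half, assuming root privileges and no conflicting interface names:

ip netns add nvmf_tgt_ns_spdk
# veth pairs; the *_br peers stay in the root namespace for bridging.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Rules are tagged with an SPDK_NVMF comment so teardown can strip exactly these.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                   # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The sub-millisecond RTTs in the ping statistics above confirm the bridged veth path before any NVMe/TCP traffic is attempted.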
00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:54.999 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:54.999 [2024-11-06 14:20:22.486293] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:15:54.999 [2024-11-06 14:20:22.486427] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.257 [2024-11-06 14:20:22.644761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.257 [2024-11-06 14:20:22.744120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.257 [2024-11-06 14:20:22.744173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.257 [2024-11-06 14:20:22.744184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.257 [2024-11-06 14:20:22.744200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.257 [2024-11-06 14:20:22.744214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
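nvmfappstart then launches the target binary inside the namespace and blocks until its JSON-RPC socket answers; the EAL and trace notices above are nvmf_tgt coming up on one core with all tracepoint groups enabled (-e 0xFFFF). A sketch of the launch plus a simple readiness poll (rpc_get_methods is used here as the probe; the harness's waitforlisten is the real mechanism):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the app serves requests.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
    sleep 0.5
done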
00:15:55.257 [2024-11-06 14:20:22.745173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.822 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:55.822 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:15:55.822 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.822 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:55.822 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:55.822 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.822 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:56.080 [2024-11-06 14:20:23.543472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.080 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:56.080 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:56.080 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:56.338 Malloc1 00:15:56.338 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:56.596 Malloc2 00:15:56.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:56.854 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:57.110 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:57.110 [2024-11-06 14:20:24.692400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:57.110 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:57.110 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 69f6e218-7b20-4284-ab61-76acb3d3c0c5 -a 10.0.0.3 -s 4420 -i 4 00:15:57.372 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.372 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:15:57.372 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.372 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:57.372 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:15:59.269 14:20:26 
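The RPC burst above provisions the target end to end and the kernel host connects to it: a TCP transport with 8192-byte in-capsule data (-u), two 64 MiB malloc bdevs with 512 B blocks, a subsystem that allows any host (-a) with serial SPDKISFASTANDAWESOME, namespace 1 added auto-visible, and a listener on 10.0.0.3:4420. Condensed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 -I 69f6e218-7b20-4284-ab61-76acb3d3c0c5 -i 4

Because -a disables subsystem-level host whitelisting, per-namespace masking is the only access control being exercised here; waitforserial then polls lsblk for the serial until the namespace appears as a block device.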
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:59.269 [ 0]:0x1 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0275871a016457199cfae291da97b8f 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0275871a016457199cfae291da97b8f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.269 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:59.527 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:59.528 [ 0]:0x1 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0275871a016457199cfae291da97b8f 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0275871a016457199cfae291da97b8f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
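Above, the harness resolves which kernel controller the connect produced (nvme list-subsys filtered by subsystem NQN) and runs its ns_is_visible check against it. A stand-alone sketch reconstructing the helper's observable behaviour from this log (the grep prints the "[ 0]:0x1" listing line; the pass/fail signal is the NGUID, which reads as all zeros for a namespace masked away from this host):

ctrl_id=$(nvme list-subsys -o json | jq -r \
    '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')
ns_is_visible() {    # $1 = nsid in hex, e.g. 0x1
    nvme list-ns "/dev/$ctrl_id" | grep "$1"
    local nguid
    nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}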
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:59.528 [ 1]:0x2 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.528 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:59.785 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=083fb791f7b44ecdb7ee56cc7e802643 00:15:59.785 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 083fb791f7b44ecdb7ee56cc7e802643 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.785 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:59.785 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:59.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.785 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.043 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 69f6e218-7b20-4284-ab61-76acb3d3c0c5 -a 10.0.0.3 -s 4420 -i 4 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:16:00.301 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 
00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:02.842 [ 0]:0x2 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=083fb791f7b44ecdb7ee56cc7e802643 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 083fb791f7b44ecdb7ee56cc7e802643 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.842 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:02.842 [ 0]:0x1 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0275871a016457199cfae291da97b8f 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0275871a016457199cfae291da97b8f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:02.842 [ 1]:0x2 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=083fb791f7b44ecdb7ee56cc7e802643 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 083fb791f7b44ecdb7ee56cc7e802643 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.842 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:03.100 14:20:30 
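This is the heart of the masking test: namespace 1, re-added with --no-auto-visible, stays hidden until nvmf_ns_add_host grants host1 access, and nvmf_ns_remove_host hides it again (the masked re-check continues below). The cycle, condensed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0 | grep 0x1                    # now listed: "[ 0]:0x1"
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros: masked again

Note that the checks run over the existing connection; no reconnect is needed for a visibility change to take effect.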
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:03.100 [ 0]:0x2 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=083fb791f7b44ecdb7ee56cc7e802643 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 083fb791f7b44ecdb7ee56cc7e802643 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:03.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.100 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 69f6e218-7b20-4284-ab61-76acb3d3c0c5 -a 10.0.0.3 -s 4420 -i 4 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:16:03.358 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:05.293 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.551 [ 0]:0x1 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0275871a016457199cfae291da97b8f 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0275871a016457199cfae291da97b8f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.551 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.551 [ 1]:0x2 00:16:05.551 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.551 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.551 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=083fb791f7b44ecdb7ee56cc7e802643 00:16:05.551 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 083fb791f7b44ecdb7ee56cc7e802643 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.551 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.809 [ 0]:0x2 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=083fb791f7b44ecdb7ee56cc7e802643 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 083fb791f7b44ecdb7ee56cc7e802643 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@650 -- # local es=0 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:05.809 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:06.068 [2024-11-06 14:20:33.516374] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:06.068 2024/11/06 14:20:33 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:16:06.068 request: 00:16:06.068 { 00:16:06.068 "method": "nvmf_ns_remove_host", 00:16:06.068 "params": { 00:16:06.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.068 "nsid": 2, 00:16:06.068 "host": "nqn.2016-06.io.spdk:host1" 00:16:06.068 } 00:16:06.068 } 00:16:06.068 Got JSON-RPC error response 00:16:06.068 GoRPCClient: error on JSON-RPC call 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.068 14:20:33 
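The Invalid parameters failure above is deliberate: namespace 2 was added auto-visible, so the target refuses per-host visibility edits for it (nvmf_rpc_ns_visible_paused rejects the request, and the NOT wrapper converts the RPC failure into a test pass). Reproducing just the failing call:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
    && echo 'unexpected: masking RPC accepted on an auto-visible namespace' \
    || echo 'rejected as expected'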
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:06.068 [ 0]:0x2 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=083fb791f7b44ecdb7ee56cc7e802643 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 083fb791f7b44ecdb7ee56cc7e802643 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=80438 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 80438 /var/tmp/host.sock 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@833 -- # '[' -z 80438 ']' 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:06.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:06.068 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:06.326 [2024-11-06 14:20:33.745448] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:16:06.326 [2024-11-06 14:20:33.745570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80438 ] 00:16:06.326 [2024-11-06 14:20:33.903020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.584 [2024-11-06 14:20:34.008228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.150 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:07.150 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:16:07.151 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.408 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 488c825a-c8db-4f40-8cc3-d9c33391a9bf 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 488C825AC8DB4F408CC3D9C33391A9BF -i 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4d9b5630-e140-4547-aa6e-3938f8341fa2 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4D9B5630E1404547AA6E3938F8341FA2 -i 00:16:07.923 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:08.181 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:08.437 14:20:35 
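For the second half of the test a separate SPDK process (spdk_tgt on RPC socket /var/tmp/host.sock, core mask 0x2) takes over the host role, so visibility can be observed through the userspace bdev_nvme initiator rather than the kernel. Both namespaces are re-added hidden, each with an explicit NGUID derived from the UUIDs generated at the top, and each host NQN is granted exactly one of them. A sketch with the flags as captured above (uuid2nguid strips the dashes; the upper-casing via tr here, and reading -i as the short form of --no-auto-visible, are assumptions about the helpers, not shown verbatim in this log):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nguid1=$(tr -d - <<< 488c825a-c8db-4f40-8cc3-d9c33391a9bf | tr a-f A-F)
nguid2=$(tr -d - <<< 4d9b5630-e140-4547-aa6e-3938f8341fa2 | tr a-f A-F)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid1" -i
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g "$nguid2" -i
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2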
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:08.437 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:08.695 nvme0n1 00:16:08.695 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:08.695 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:08.954 nvme1n2 00:16:08.954 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:08.954 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:08.954 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:08.954 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:08.954 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:09.213 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:09.213 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:09.213 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:09.213 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:09.471 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 488c825a-c8db-4f40-8cc3-d9c33391a9bf == \4\8\8\c\8\2\5\a\-\c\8\d\b\-\4\f\4\0\-\8\c\c\3\-\d\9\c\3\3\3\9\1\a\9\b\f ]] 00:16:09.471 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:09.471 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:09.471 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:09.729 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4d9b5630-e140-4547-aa6e-3938f8341fa2 == \4\d\9\b\5\6\3\0\-\e\1\4\0\-\4\5\4\7\-\a\a\6\e\-\3\9\3\8\f\8\3\4\1\f\a\2 ]] 00:16:09.729 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
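Each bdev_nvme_attach_controller above presents a different host NQN, so each resulting controller enumerates only its whitelisted namespace; bdev_get_bdevs then confirms both the bdev names (nvme0n1 for host1, nvme1n2 for host2) and that the reported UUIDs round-trip to the NGUIDs set a step earlier. Condensed:

hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # sees nsid 1 only
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # sees nsid 2 only
hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect: nvme0n1 nvme1n2
hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'       # expect ns1's original UUID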
00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 488c825a-c8db-4f40-8cc3-d9c33391a9bf 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 488C825AC8DB4F408CC3D9C33391A9BF 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 488C825AC8DB4F408CC3D9C33391A9BF 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:09.987 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 488C825AC8DB4F408CC3D9C33391A9BF 00:16:10.246 [2024-11-06 14:20:37.777891] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:10.246 [2024-11-06 14:20:37.777939] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:10.246 [2024-11-06 14:20:37.777964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.246 2024/11/06 14:20:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:488C825AC8DB4F408CC3D9C33391A9BF no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.246 request: 00:16:10.246 { 00:16:10.246 "method": "nvmf_subsystem_add_ns", 00:16:10.246 "params": { 00:16:10.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.246 "namespace": { 00:16:10.246 "bdev_name": "invalid", 00:16:10.246 "nsid": 1, 00:16:10.246 "nguid": "488C825AC8DB4F408CC3D9C33391A9BF", 00:16:10.246 "no_auto_visible": false 00:16:10.246 } 00:16:10.246 } 00:16:10.246 } 00:16:10.246 Got JSON-RPC error response 00:16:10.246 GoRPCClient: error on JSON-RPC call 00:16:10.246 14:20:37 
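The second negative test asks the target to back a namespace with a bdev named "invalid": bdev_open_ext cannot find it, spdk_nvmf_subsystem_add_ns_ext fails with -19 (ENODEV), and the RPC surfaces Invalid parameters, which the NOT wrapper again treats as a pass. Just the failing call:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 \
    -g 488C825AC8DB4F408CC3D9C33391A9BF \
    || echo 'rejected as expected: no bdev named "invalid" exists'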
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:10.246 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.246 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.246 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.246 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 488c825a-c8db-4f40-8cc3-d9c33391a9bf 00:16:10.246 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:10.246 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 488C825AC8DB4F408CC3D9C33391A9BF -i 00:16:10.504 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:12.403 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:12.403 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:12.403 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 80438 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 80438 ']' 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 80438 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80438 00:16:12.661 killing process with pid 80438 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80438' 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 80438 00:16:12.661 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 80438 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@121 -- # sync 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:14.559 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:14.559 rmmod nvme_tcp 00:16:14.559 rmmod nvme_fabrics 00:16:14.559 rmmod nvme_keyring 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 80068 ']' 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 80068 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 80068 ']' 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 80068 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80068 00:16:14.559 killing process with pid 80068 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80068' 00:16:14.559 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 80068 00:16:14.560 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 80068 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:15.493 14:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.493 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:16:15.493 00:16:15.493 real 0m21.117s 00:16:15.493 user 0m34.485s 00:16:15.493 sys 0m2.487s 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:15.493 ************************************ 00:16:15.493 END TEST nvmf_ns_masking 00:16:15.493 ************************************ 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:15.493 ************************************ 00:16:15.493 START TEST nvmf_vfio_user 00:16:15.493 ************************************ 00:16:15.493 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:15.493 * Looking for test storage... 00:16:15.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:15.751 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:15.751 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:16:15.751 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:15.751 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:15.751 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.751 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.751 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:15.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.752 --rc genhtml_branch_coverage=1 00:16:15.752 --rc genhtml_function_coverage=1 00:16:15.752 --rc genhtml_legend=1 00:16:15.752 --rc geninfo_all_blocks=1 00:16:15.752 --rc geninfo_unexecuted_blocks=1 00:16:15.752 00:16:15.752 ' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:15.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.752 --rc genhtml_branch_coverage=1 00:16:15.752 --rc genhtml_function_coverage=1 00:16:15.752 --rc genhtml_legend=1 00:16:15.752 --rc geninfo_all_blocks=1 00:16:15.752 --rc geninfo_unexecuted_blocks=1 00:16:15.752 00:16:15.752 ' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:15.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.752 --rc genhtml_branch_coverage=1 00:16:15.752 --rc genhtml_function_coverage=1 00:16:15.752 --rc genhtml_legend=1 00:16:15.752 --rc geninfo_all_blocks=1 00:16:15.752 --rc geninfo_unexecuted_blocks=1 00:16:15.752 00:16:15.752 ' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:15.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.752 --rc genhtml_branch_coverage=1 00:16:15.752 --rc genhtml_function_coverage=1 00:16:15.752 --rc genhtml_legend=1 00:16:15.752 --rc geninfo_all_blocks=1 00:16:15.752 --rc geninfo_unexecuted_blocks=1 00:16:15.752 00:16:15.752 ' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 
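Note: the lcov probe stepped through above is a component-wise version comparison (scripts/common.sh's cmp_versions). A minimal sketch of the same logic, reconstructed from the xtrace rather than copied from the script, so details such as the numeric validation are simplified:

    # Returns 0 (true) when $1 is strictly older than $2; components split on '.', '-' and ':'.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not 'less than'
    }

    # lcov 1.15 < 2, so the legacy '--rc lcov_*_coverage=1' options seen above get selected.
    version_lt 1.15 2 && echo legacy-lcov-options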
00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.752 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:15.752 14:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:15.752 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=80813 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 80813' 00:16:15.753 Process pid: 80813 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 80813 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 80813 ']' 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:15.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:15.753 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:15.753 [2024-11-06 14:20:43.287697] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:16:15.753 [2024-11-06 14:20:43.287924] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.011 [2024-11-06 14:20:43.442473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.011 [2024-11-06 14:20:43.524713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.011 [2024-11-06 14:20:43.524845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
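Note: the harness above starts the target and blocks in waitforlisten until the RPC socket answers. A standalone equivalent, with the binary path, flags, and core mask taken from the trace; polling rpc_get_methods as the readiness check is an assumption about what waitforlisten amounts to, not a copy of it:

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

    # Poll until the app listens on /var/tmp/spdk.sock (rpc.py's default socket).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done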
00:16:16.011 [2024-11-06 14:20:43.524907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.011 [2024-11-06 14:20:43.524931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.011 [2024-11-06 14:20:43.524958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.011 [2024-11-06 14:20:43.526088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.011 [2024-11-06 14:20:43.526184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.011 [2024-11-06 14:20:43.526404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.011 [2024-11-06 14:20:43.526551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.576 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:16.576 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:16:16.576 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:17.511 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:17.768 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:17.768 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:17.768 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:17.768 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:17.768 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:18.031 Malloc1 00:16:18.031 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:18.290 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:18.548 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:18.806 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:18.806 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:18.806 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:18.806 Malloc2 00:16:19.064 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:19.064 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:19.322 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:19.580 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:19.580 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:19.580 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:19.580 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:19.580 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:19.581 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:19.581 [2024-11-06 14:20:47.125942] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:16:19.581 [2024-11-06 14:20:47.126042] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80949 ] 00:16:19.840 [2024-11-06 14:20:47.288600] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:19.840 [2024-11-06 14:20:47.291137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:19.840 [2024-11-06 14:20:47.291165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa9a4139000 00:16:19.840 [2024-11-06 14:20:47.292100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:19.840 [2024-11-06 14:20:47.293104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:19.840 [2024-11-06 14:20:47.294108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:19.840 [2024-11-06 14:20:47.295117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:19.840 [2024-11-06 14:20:47.296120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:19.840 [2024-11-06 14:20:47.297111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:19.840 [2024-11-06 14:20:47.298120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:19.841 [2024-11-06 14:20:47.299132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:19.841 
[2024-11-06 14:20:47.300133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:19.841 [2024-11-06 14:20:47.300150] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa9a412e000 00:16:19.841 [2024-11-06 14:20:47.301245] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:19.841 [2024-11-06 14:20:47.313835] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:19.841 [2024-11-06 14:20:47.313879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:19.841 [2024-11-06 14:20:47.316195] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:19.841 [2024-11-06 14:20:47.316292] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:19.841 [2024-11-06 14:20:47.316695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:19.841 [2024-11-06 14:20:47.316722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:19.841 [2024-11-06 14:20:47.316729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:19.841 [2024-11-06 14:20:47.317961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:19.841 [2024-11-06 14:20:47.317980] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:19.841 [2024-11-06 14:20:47.317991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:19.841 [2024-11-06 14:20:47.318174] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:19.841 [2024-11-06 14:20:47.318193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:19.841 [2024-11-06 14:20:47.318202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:19.841 [2024-11-06 14:20:47.319178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:19.841 [2024-11-06 14:20:47.319195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:19.841 [2024-11-06 14:20:47.320185] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:19.841 [2024-11-06 14:20:47.320201] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:19.841 
[2024-11-06 14:20:47.320208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:19.841 [2024-11-06 14:20:47.320216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:19.841 [2024-11-06 14:20:47.320323] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:19.841 [2024-11-06 14:20:47.320328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:19.841 [2024-11-06 14:20:47.320337] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:19.841 [2024-11-06 14:20:47.321192] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:19.841 [2024-11-06 14:20:47.322191] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:19.841 [2024-11-06 14:20:47.323197] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:19.841 [2024-11-06 14:20:47.324192] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:19.841 [2024-11-06 14:20:47.324269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:19.841 [2024-11-06 14:20:47.325198] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:19.841 [2024-11-06 14:20:47.325215] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:19.841 [2024-11-06 14:20:47.325221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:19.841 [2024-11-06 14:20:47.325249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325272] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:19.841 [2024-11-06 14:20:47.325278] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.841 [2024-11-06 14:20:47.325285] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.841 [2024-11-06 14:20:47.325305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:19.841 [2024-11-06 14:20:47.325358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:19.841 [2024-11-06 
14:20:47.325370] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:19.841 [2024-11-06 14:20:47.325378] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:19.841 [2024-11-06 14:20:47.325382] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:19.841 [2024-11-06 14:20:47.325389] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:19.841 [2024-11-06 14:20:47.325395] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:19.841 [2024-11-06 14:20:47.325401] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:19.841 [2024-11-06 14:20:47.325406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:19.841 [2024-11-06 14:20:47.325441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:19.841 [2024-11-06 14:20:47.325456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.841 [2024-11-06 14:20:47.325466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.841 [2024-11-06 14:20:47.325475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.841 [2024-11-06 14:20:47.325485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.841 [2024-11-06 14:20:47.325489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:19.841 [2024-11-06 14:20:47.325521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:19.841 [2024-11-06 14:20:47.325526] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:19.841 [2024-11-06 14:20:47.325533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 
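Note, for reference while reading the identify exchange above: the VFIOUSER endpoint being probed was created by the seq 1 2 setup loop traced earlier. Condensed, with every value taken from that trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"
        $rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        # For VFIOUSER the listener address is a directory: the target creates the
        # vfio-user control socket ('cntrl', per the log above) there for the host to open.
        $rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done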
00:16:19.841 [2024-11-06 14:20:47.325541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:19.841 [2024-11-06 14:20:47.325572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:19.841 [2024-11-06 14:20:47.325636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325660] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:19.841 [2024-11-06 14:20:47.325666] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:19.841 [2024-11-06 14:20:47.325670] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.841 [2024-11-06 14:20:47.325678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:19.841 [2024-11-06 14:20:47.325692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:19.841 [2024-11-06 14:20:47.325711] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:19.841 [2024-11-06 14:20:47.325721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:19.841 [2024-11-06 14:20:47.325746] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:19.841 [2024-11-06 14:20:47.325755] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.842 [2024-11-06 14:20:47.325759] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.842 [2024-11-06 14:20:47.325769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.325793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:19.842 [2024-11-06 14:20:47.325815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for identify namespace id descriptors (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:19.842 [2024-11-06 14:20:47.325846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.842 [2024-11-06 14:20:47.325854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.842 [2024-11-06 14:20:47.325862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.325876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:19.842 [2024-11-06 14:20:47.325893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325940] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:19.842 [2024-11-06 14:20:47.325946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:19.842 [2024-11-06 14:20:47.325961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:19.842 [2024-11-06 14:20:47.325991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.326000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:19.842 [2024-11-06 14:20:47.326013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.326019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:19.842 [2024-11-06 14:20:47.326031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.326037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
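Note: the enclosing loop (target/nvmf_vfio_user.sh@80, seq 1 2) repeats this probe for the second endpoint set up above; the equivalent standalone invocation, with the -r URI format copied from the first run, would be:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -g -L nvme -L nvme_vfio -L vfio_pci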
00:16:19.842 [2024-11-06 14:20:47.326049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.326057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:19.842 [2024-11-06 14:20:47.326074] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:19.842 [2024-11-06 14:20:47.326081] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:19.842 [2024-11-06 14:20:47.326086] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:19.842 [2024-11-06 14:20:47.326090] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:19.842 [2024-11-06 14:20:47.326095] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:19.842 [2024-11-06 14:20:47.326103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:19.842 [2024-11-06 14:20:47.326112] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:19.842 [2024-11-06 14:20:47.326118] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:19.842 [2024-11-06 14:20:47.326125] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.842 [2024-11-06 14:20:47.326131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.326143] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:19.842 [2024-11-06 14:20:47.326147] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.842 [2024-11-06 14:20:47.326154] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.842 [2024-11-06 14:20:47.326160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.326171] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:19.842 [2024-11-06 14:20:47.326175] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:19.842 [2024-11-06 14:20:47.326180] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.842 [2024-11-06 14:20:47.326188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:19.842 [2024-11-06 14:20:47.326198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:19.842 [2024-11-06 14:20:47.326213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:19.842 [2024-11-06 14:20:47.326224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:19.842 
[2024-11-06 14:20:47.326230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:19.842 ===================================================== 00:16:19.842 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:19.842 ===================================================== 00:16:19.842 Controller Capabilities/Features 00:16:19.842 ================================ 00:16:19.842 Vendor ID: 4e58 00:16:19.842 Subsystem Vendor ID: 4e58 00:16:19.842 Serial Number: SPDK1 00:16:19.842 Model Number: SPDK bdev Controller 00:16:19.842 Firmware Version: 25.01 00:16:19.842 Recommended Arb Burst: 6 00:16:19.842 IEEE OUI Identifier: 8d 6b 50 00:16:19.842 Multi-path I/O 00:16:19.842 May have multiple subsystem ports: Yes 00:16:19.842 May have multiple controllers: Yes 00:16:19.842 Associated with SR-IOV VF: No 00:16:19.842 Max Data Transfer Size: 131072 00:16:19.842 Max Number of Namespaces: 32 00:16:19.842 Max Number of I/O Queues: 127 00:16:19.842 NVMe Specification Version (VS): 1.3 00:16:19.842 NVMe Specification Version (Identify): 1.3 00:16:19.842 Maximum Queue Entries: 256 00:16:19.842 Contiguous Queues Required: Yes 00:16:19.842 Arbitration Mechanisms Supported 00:16:19.842 Weighted Round Robin: Not Supported 00:16:19.842 Vendor Specific: Not Supported 00:16:19.842 Reset Timeout: 15000 ms 00:16:19.842 Doorbell Stride: 4 bytes 00:16:19.842 NVM Subsystem Reset: Not Supported 00:16:19.842 Command Sets Supported 00:16:19.842 NVM Command Set: Supported 00:16:19.842 Boot Partition: Not Supported 00:16:19.842 Memory Page Size Minimum: 4096 bytes 00:16:19.842 Memory Page Size Maximum: 4096 bytes 00:16:19.842 Persistent Memory Region: Not Supported 00:16:19.842 Optional Asynchronous Events Supported 00:16:19.842 Namespace Attribute Notices: Supported 00:16:19.842 Firmware Activation Notices: Not Supported 00:16:19.842 ANA Change Notices: Not Supported 00:16:19.842 PLE Aggregate Log Change Notices: Not Supported 00:16:19.842 LBA Status Info Alert Notices: Not Supported 00:16:19.842 EGE Aggregate Log Change Notices: Not Supported 00:16:19.842 Normal NVM Subsystem Shutdown event: Not Supported 00:16:19.842 Zone Descriptor Change Notices: Not Supported 00:16:19.842 Discovery Log Change Notices: Not Supported 00:16:19.842 Controller Attributes 00:16:19.842 128-bit Host Identifier: Supported 00:16:19.842 Non-Operational Permissive Mode: Not Supported 00:16:19.842 NVM Sets: Not Supported 00:16:19.842 Read Recovery Levels: Not Supported 00:16:19.842 Endurance Groups: Not Supported 00:16:19.842 Predictable Latency Mode: Not Supported 00:16:19.842 Traffic Based Keep ALive: Not Supported 00:16:19.842 Namespace Granularity: Not Supported 00:16:19.842 SQ Associations: Not Supported 00:16:19.842 UUID List: Not Supported 00:16:19.842 Multi-Domain Subsystem: Not Supported 00:16:19.842 Fixed Capacity Management: Not Supported 00:16:19.842 Variable Capacity Management: Not Supported 00:16:19.842 Delete Endurance Group: Not Supported 00:16:19.842 Delete NVM Set: Not Supported 00:16:19.842 Extended LBA Formats Supported: Not Supported 00:16:19.842 Flexible Data Placement Supported: Not Supported 00:16:19.842 00:16:19.842 Controller Memory Buffer Support 00:16:19.842 ================================ 00:16:19.842 Supported: No 00:16:19.842 00:16:19.842 Persistent Memory Region Support 00:16:19.842 ================================ 00:16:19.842 Supported: No 00:16:19.842 00:16:19.842 Admin Command Set Attributes 
00:16:19.842 ============================ 00:16:19.842 Security Send/Receive: Not Supported 00:16:19.842 Format NVM: Not Supported 00:16:19.842 Firmware Activate/Download: Not Supported 00:16:19.842 Namespace Management: Not Supported 00:16:19.843 Device Self-Test: Not Supported 00:16:19.843 Directives: Not Supported 00:16:19.843 NVMe-MI: Not Supported 00:16:19.843 Virtualization Management: Not Supported 00:16:19.843 Doorbell Buffer Config: Not Supported 00:16:19.843 Get LBA Status Capability: Not Supported 00:16:19.843 Command & Feature Lockdown Capability: Not Supported 00:16:19.843 Abort Command Limit: 4 00:16:19.843 Async Event Request Limit: 4 00:16:19.843 Number of Firmware Slots: N/A 00:16:19.843 Firmware Slot 1 Read-Only: N/A 00:16:19.843 Firmware Activation Without Reset: N/A 00:16:19.843 Multiple Update Detection Support: N/A 00:16:19.843 Firmware Update Granularity: No Information Provided 00:16:19.843 Per-Namespace SMART Log: No 00:16:19.843 Asymmetric Namespace Access Log Page: Not Supported 00:16:19.843 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:19.843 Command Effects Log Page: Supported 00:16:19.843 Get Log Page Extended Data: Supported 00:16:19.843 Telemetry Log Pages: Not Supported 00:16:19.843 Persistent Event Log Pages: Not Supported 00:16:19.843 Supported Log Pages Log Page: May Support 00:16:19.843 Commands Supported & Effects Log Page: Not Supported 00:16:19.843 Feature Identifiers & Effects Log Page:May Support 00:16:19.843 NVMe-MI Commands & Effects Log Page: May Support 00:16:19.843 Data Area 4 for Telemetry Log: Not Supported 00:16:19.843 Error Log Page Entries Supported: 128 00:16:19.843 Keep Alive: Supported 00:16:19.843 Keep Alive Granularity: 10000 ms 00:16:19.843 00:16:19.843 NVM Command Set Attributes 00:16:19.843 ========================== 00:16:19.843 Submission Queue Entry Size 00:16:19.843 Max: 64 00:16:19.843 Min: 64 00:16:19.843 Completion Queue Entry Size 00:16:19.843 Max: 16 00:16:19.843 Min: 16 00:16:19.843 Number of Namespaces: 32 00:16:19.843 Compare Command: Supported 00:16:19.843 Write Uncorrectable Command: Not Supported 00:16:19.843 Dataset Management Command: Supported 00:16:19.843 Write Zeroes Command: Supported 00:16:19.843 Set Features Save Field: Not Supported 00:16:19.843 Reservations: Not Supported 00:16:19.843 Timestamp: Not Supported 00:16:19.843 Copy: Supported 00:16:19.843 Volatile Write Cache: Present 00:16:19.843 Atomic Write Unit (Normal): 1 00:16:19.843 Atomic Write Unit (PFail): 1 00:16:19.843 Atomic Compare & Write Unit: 1 00:16:19.843 Fused Compare & Write: Supported 00:16:19.843 Scatter-Gather List 00:16:19.843 SGL Command Set: Supported (Dword aligned) 00:16:19.843 SGL Keyed: Not Supported 00:16:19.843 SGL Bit Bucket Descriptor: Not Supported 00:16:19.843 SGL Metadata Pointer: Not Supported 00:16:19.843 Oversized SGL: Not Supported 00:16:19.843 SGL Metadata Address: Not Supported 00:16:19.843 SGL Offset: Not Supported 00:16:19.843 Transport SGL Data Block: Not Supported 00:16:19.843 Replay Protected Memory Block: Not Supported 00:16:19.843 00:16:19.843 Firmware Slot Information 00:16:19.843 ========================= 00:16:19.843 Active slot: 1 00:16:19.843 Slot 1 Firmware Revision: 25.01 00:16:19.843 00:16:19.843 00:16:19.843 Commands Supported and Effects 00:16:19.843 ============================== 00:16:19.843 Admin Commands 00:16:19.843 -------------- 00:16:19.843 Get Log Page (02h): Supported 00:16:19.843 Identify (06h): Supported 00:16:19.843 Abort (08h): Supported 00:16:19.843 Set Features (09h): Supported 
00:16:19.843 Get Features (0Ah): Supported 00:16:19.843 Asynchronous Event Request (0Ch): Supported 00:16:19.843 Keep Alive (18h): Supported 00:16:19.843 I/O Commands 00:16:19.843 ------------ 00:16:19.843 Flush (00h): Supported LBA-Change 00:16:19.843 Write (01h): Supported LBA-Change 00:16:19.843 Read (02h): Supported 00:16:19.843 Compare (05h): Supported 00:16:19.843 Write Zeroes (08h): Supported LBA-Change 00:16:19.843 Dataset Management (09h): Supported LBA-Change 00:16:19.843 Copy (19h): Supported LBA-Change 00:16:19.843 00:16:19.843 Error Log 00:16:19.843 ========= 00:16:19.843 00:16:19.843 Arbitration 00:16:19.843 =========== 00:16:19.843 Arbitration Burst: 1 00:16:19.843 00:16:19.843 Power Management 00:16:19.843 ================ 00:16:19.843 Number of Power States: 1 00:16:19.843 Current Power State: Power State #0 00:16:19.843 Power State #0: 00:16:19.843 Max Power: 0.00 W 00:16:19.843 Non-Operational State: Operational 00:16:19.843 Entry Latency: Not Reported 00:16:19.843 Exit Latency: Not Reported 00:16:19.843 Relative Read Throughput: 0 00:16:19.843 Relative Read Latency: 0 00:16:19.843 Relative Write Throughput: 0 00:16:19.843 Relative Write Latency: 0 00:16:19.843 Idle Power: Not Reported 00:16:19.843 Active Power: Not Reported 00:16:19.843 Non-Operational Permissive Mode: Not Supported 00:16:19.843 00:16:19.843 Health Information 00:16:19.843 ================== 00:16:19.843 Critical Warnings: 00:16:19.843 Available Spare Space: OK 00:16:19.843 Temperature: OK 00:16:19.843 Device Reliability: OK 00:16:19.843 Read Only: No 00:16:19.843 Volatile Memory Backup: OK 00:16:19.843 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:19.843 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:19.843 Available Spare: 0% 00:16:19.843 Available Spare Threshold: 0% 00:16:19.843 Life Percentage Used: 0% 00:16:19.843 Data Units Read: 0 00:16:19.843 Data Units Written: 0 00:16:19.843 Host Read Commands: 0 00:16:19.843 Host Write Commands: 0 00:16:19.843 Controller Busy Time: 0 minutes 00:16:19.843 Power Cycles: 0 00:16:19.843 Power On Hours: 0 hours 00:16:19.843 Unsafe Shutdowns: 0 00:16:19.843 Unrecoverable Media Errors: 0 00:16:19.843 Lifetime Error Log Entries: 0 00:16:19.843 Warning Temperature Time: 0 minutes 00:16:19.843 Critical Temperature Time: 0 minutes 00:16:19.843 00:16:19.843 Number of Queues 00:16:19.843 ================ 00:16:19.843 Number of I/O Submission Queues: 127 00:16:19.843 Number of I/O Completion Queues: 127 00:16:19.843 00:16:19.843 Active Namespaces 00:16:19.843 ================= 00:16:19.843 Namespace ID:1 00:16:19.843 Error Recovery Timeout: Unlimited 00:16:19.843 Command Set Identifier: NVM (00h) 00:16:19.843 Deallocate: Supported 00:16:19.843 Deallocated/Unwritten Error: Not Supported 00:16:19.843 Deallocated Read Value: Unknown 00:16:19.843 Deallocate in Write Zeroes: Not Supported 00:16:19.843 Deallocated Guard Field: 0xFFFF 00:16:19.843 Flush: Supported 00:16:19.843 Reservation: Supported 00:16:19.843 Namespace Sharing Capabilities: Multiple Controllers 00:16:19.843 Size (in LBAs): 131072 (0GiB) 00:16:19.843 Capacity (in LBAs): 131072 (0GiB) 00:16:19.843 Utilization (in LBAs): 131072 (0GiB) 00:16:19.843 NGUID: F77A7E7EE7FB4770A8500ED8555A12D4 00:16:19.843 UUID: f77a7e7e-e7fb-4770-a850-0ed8555a12d4 00:16:19.843 Thin Provisioning: Not Supported 00:16:19.843 Per-NS Atomic Units: Yes 00:16:19.843 Atomic Boundary Size (Normal): 0 00:16:19.843 Atomic Boundary Size (PFail): 0 00:16:19.843 Atomic Boundary Offset: 0 00:16:19.843 Maximum Single Source Range Length: 65535 00:16:19.843 Maximum Copy Length: 65535 00:16:19.843 Maximum Source Range Count: 1 00:16:19.844 NGUID/EUI64 Never Reused: No 00:16:19.844 Namespace Write Protected: No 00:16:19.844 Number of LBA Formats: 1 00:16:19.844 Current LBA Format: LBA Format #00 00:16:19.844 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:19.844
00:16:19.843 [2024-11-06 14:20:47.326351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:19.843 [2024-11-06 14:20:47.326363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:19.843 [2024-11-06 14:20:47.326409] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:19.843 [2024-11-06 14:20:47.326420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.843 [2024-11-06 14:20:47.326428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.843 [2024-11-06 14:20:47.326434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.843 [2024-11-06 14:20:47.326444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.843 [2024-11-06 14:20:47.329964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:19.843 [2024-11-06 14:20:47.329997] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:19.843 [2024-11-06 14:20:47.330230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:19.843 [2024-11-06 14:20:47.330291] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:19.843 [2024-11-06 14:20:47.330299] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:19.843 [2024-11-06 14:20:47.331225] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:19.843 [2024-11-06 14:20:47.331247] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:19.843 [2024-11-06 14:20:47.331698] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:19.843 [2024-11-06 14:20:47.334965] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:16:19.844 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:20.102 [2024-11-06 14:20:47.684531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:25.366 Initializing NVMe Controllers 00:16:25.366 Attached to NVMe over Fabrics
controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:25.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:25.366 Initialization complete. Launching workers. 00:16:25.366 ======================================================== 00:16:25.366 Latency(us) 00:16:25.366 Device Information : IOPS MiB/s Average min max 00:16:25.366 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 37502.24 146.49 3412.62 972.68 8132.30 00:16:25.366 ======================================================== 00:16:25.366 Total : 37502.24 146.49 3412.62 972.68 8132.30 00:16:25.366 00:16:25.366 [2024-11-06 14:20:52.692831] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:25.366 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:25.623 [2024-11-06 14:20:53.057641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:30.884 Initializing NVMe Controllers 00:16:30.884 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:30.884 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:30.884 Initialization complete. Launching workers. 00:16:30.884 ======================================================== 00:16:30.884 Latency(us) 00:16:30.884 Device Information : IOPS MiB/s Average min max 00:16:30.884 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7993.04 4988.82 9998.45 00:16:30.884 ======================================================== 00:16:30.884 Total : 16025.60 62.60 7993.04 4988.82 9998.45 00:16:30.884 00:16:30.884 [2024-11-06 14:20:58.080168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:30.884 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:30.884 [2024-11-06 14:20:58.427397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:36.180 [2024-11-06 14:21:03.484632] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:36.180 Initializing NVMe Controllers 00:16:36.180 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:36.180 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:36.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:36.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:36.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:36.180 Initialization complete. Launching workers. 
00:16:36.180 Starting thread on core 2 00:16:36.180 Starting thread on core 3 00:16:36.180 Starting thread on core 1 00:16:36.180 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:36.437 [2024-11-06 14:21:03.858524] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:39.717 [2024-11-06 14:21:06.978540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:39.717 Initializing NVMe Controllers 00:16:39.717 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:39.717 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:39.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:39.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:39.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:39.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:39.717 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:16:39.717 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:39.717 Initialization complete. Launching workers. 00:16:39.717 Starting thread on core 1 with urgent priority queue 00:16:39.717 Starting thread on core 2 with urgent priority queue 00:16:39.717 Starting thread on core 3 with urgent priority queue 00:16:39.717 Starting thread on core 0 with urgent priority queue 00:16:39.717 SPDK bdev Controller (SPDK1 ) core 0: 1045.33 IO/s 95.66 secs/100000 ios 00:16:39.717 SPDK bdev Controller (SPDK1 ) core 1: 1322.67 IO/s 75.60 secs/100000 ios 00:16:39.717 SPDK bdev Controller (SPDK1 ) core 2: 1088.00 IO/s 91.91 secs/100000 ios 00:16:39.717 SPDK bdev Controller (SPDK1 ) core 3: 1152.00 IO/s 86.81 secs/100000 ios 00:16:39.717 ======================================================== 00:16:39.717 00:16:39.717 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:39.717 [2024-11-06 14:21:07.344445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:39.974 Initializing NVMe Controllers 00:16:39.974 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:39.974 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:39.974 Namespace ID: 1 size: 0GB 00:16:39.974 Initialization complete. 00:16:39.974 INFO: using host memory buffer for IO 00:16:39.974 Hello world! 
00:16:39.974 [2024-11-06 14:21:07.380328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:39.974 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:40.231 [2024-11-06 14:21:07.737459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:41.166 Initializing NVMe Controllers 00:16:41.166 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.166 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.166 Initialization complete. Launching workers. 00:16:41.166 submit (in ns) avg, min, max = 6616.3, 3053.8, 4020224.6 00:16:41.166 complete (in ns) avg, min, max = 22958.4, 1757.7, 5015480.0 00:16:41.166 00:16:41.166 Submit histogram 00:16:41.166 ================ 00:16:41.166 Range in us Cumulative Count 00:16:41.166 3.052 - 3.065: 0.0069% ( 1) 00:16:41.166 3.077 - 3.089: 0.0137% ( 1) 00:16:41.166 3.089 - 3.102: 0.0344% ( 3) 00:16:41.166 3.102 - 3.114: 0.0481% ( 2) 00:16:41.166 3.114 - 3.126: 0.0687% ( 3) 00:16:41.167 3.126 - 3.138: 0.1993% ( 19) 00:16:41.167 3.138 - 3.151: 0.4536% ( 37) 00:16:41.167 3.151 - 3.175: 1.5395% ( 158) 00:16:41.167 3.175 - 3.200: 3.3402% ( 262) 00:16:41.167 3.200 - 3.225: 7.7732% ( 645) 00:16:41.167 3.225 - 3.249: 17.4433% ( 1407) 00:16:41.167 3.249 - 3.274: 30.8247% ( 1947) 00:16:41.167 3.274 - 3.298: 44.6048% ( 2005) 00:16:41.167 3.298 - 3.323: 58.4811% ( 2019) 00:16:41.167 3.323 - 3.348: 71.7113% ( 1925) 00:16:41.167 3.348 - 3.372: 80.9966% ( 1351) 00:16:41.167 3.372 - 3.397: 86.6460% ( 822) 00:16:41.167 3.397 - 3.422: 90.3162% ( 534) 00:16:41.167 3.422 - 3.446: 92.5704% ( 328) 00:16:41.167 3.446 - 3.471: 94.2268% ( 241) 00:16:41.167 3.471 - 3.495: 95.1478% ( 134) 00:16:41.167 3.495 - 3.520: 95.7869% ( 93) 00:16:41.167 3.520 - 3.545: 96.1787% ( 57) 00:16:41.167 3.545 - 3.569: 96.4124% ( 34) 00:16:41.167 3.569 - 3.594: 96.6254% ( 31) 00:16:41.167 3.594 - 3.618: 96.7973% ( 25) 00:16:41.167 3.618 - 3.643: 96.8935% ( 14) 00:16:41.167 3.643 - 3.668: 96.9897% ( 14) 00:16:41.167 3.668 - 3.692: 97.0859% ( 14) 00:16:41.167 3.692 - 3.717: 97.1684% ( 12) 00:16:41.167 3.717 - 3.742: 97.2577% ( 13) 00:16:41.167 3.742 - 3.766: 97.3540% ( 14) 00:16:41.167 3.766 - 3.791: 97.4433% ( 13) 00:16:41.167 3.791 - 3.815: 97.4914% ( 7) 00:16:41.167 3.815 - 3.840: 97.5464% ( 8) 00:16:41.167 3.840 - 3.865: 97.6014% ( 8) 00:16:41.167 3.865 - 3.889: 97.6838% ( 12) 00:16:41.167 3.889 - 3.914: 97.7182% ( 5) 00:16:41.167 3.914 - 3.938: 97.7457% ( 4) 00:16:41.167 3.938 - 3.963: 97.7938% ( 7) 00:16:41.167 3.963 - 3.988: 97.8351% ( 6) 00:16:41.167 3.988 - 4.012: 97.8763% ( 6) 00:16:41.167 4.012 - 4.037: 97.9175% ( 6) 00:16:41.167 4.037 - 4.062: 97.9519% ( 5) 00:16:41.167 4.062 - 4.086: 97.9863% ( 5) 00:16:41.167 4.086 - 4.111: 98.0137% ( 4) 00:16:41.167 4.111 - 4.135: 98.0412% ( 4) 00:16:41.167 4.135 - 4.160: 98.0619% ( 3) 00:16:41.167 4.160 - 4.185: 98.0893% ( 4) 00:16:41.167 4.185 - 4.209: 98.1031% ( 2) 00:16:41.167 4.209 - 4.234: 98.1168% ( 2) 00:16:41.167 4.234 - 4.258: 98.1306% ( 2) 00:16:41.167 4.258 - 4.283: 98.1512% ( 3) 00:16:41.167 4.283 - 4.308: 98.1718% ( 3) 00:16:41.167 4.382 - 4.406: 98.1787% ( 1) 00:16:41.167 4.431 - 4.455: 98.1924% ( 2) 00:16:41.167 4.455 - 4.480: 98.2062% ( 2) 00:16:41.167 4.505 - 4.529: 98.2131% ( 1) 00:16:41.167 4.529 - 
4.554: 98.2199% ( 1) 00:16:41.167 4.554 - 4.578: 98.2337% ( 2) 00:16:41.167 4.578 - 4.603: 98.2818% ( 7) 00:16:41.167 4.603 - 4.628: 98.3162% ( 5) 00:16:41.167 4.628 - 4.652: 98.3574% ( 6) 00:16:41.167 4.652 - 4.677: 98.3986% ( 6) 00:16:41.167 4.677 - 4.702: 98.4467% ( 7) 00:16:41.167 4.702 - 4.726: 98.5017% ( 8) 00:16:41.167 4.726 - 4.751: 98.5361% ( 5) 00:16:41.167 4.751 - 4.775: 98.5842% ( 7) 00:16:41.167 4.775 - 4.800: 98.6323% ( 7) 00:16:41.167 4.800 - 4.825: 98.6529% ( 3) 00:16:41.167 4.825 - 4.849: 98.6667% ( 2) 00:16:41.167 4.849 - 4.874: 98.6942% ( 4) 00:16:41.167 4.874 - 4.898: 98.7079% ( 2) 00:16:41.167 4.898 - 4.923: 98.7423% ( 5) 00:16:41.167 4.923 - 4.948: 98.7973% ( 8) 00:16:41.167 4.948 - 4.972: 98.8179% ( 3) 00:16:41.167 4.972 - 4.997: 98.8385% ( 3) 00:16:41.167 4.997 - 5.022: 98.8591% ( 3) 00:16:41.167 5.022 - 5.046: 98.8660% ( 1) 00:16:41.167 5.046 - 5.071: 98.8797% ( 2) 00:16:41.167 5.071 - 5.095: 98.9003% ( 3) 00:16:41.167 5.095 - 5.120: 98.9141% ( 2) 00:16:41.167 5.120 - 5.145: 98.9278% ( 2) 00:16:41.167 5.145 - 5.169: 98.9347% ( 1) 00:16:41.167 5.169 - 5.194: 98.9416% ( 1) 00:16:41.167 5.218 - 5.243: 98.9622% ( 3) 00:16:41.167 5.243 - 5.268: 98.9691% ( 1) 00:16:41.167 5.342 - 5.366: 98.9897% ( 3) 00:16:41.167 5.391 - 5.415: 98.9966% ( 1) 00:16:41.167 5.465 - 5.489: 99.0034% ( 1) 00:16:41.167 5.514 - 5.538: 99.0103% ( 1) 00:16:41.167 5.538 - 5.563: 99.0172% ( 1) 00:16:41.167 5.563 - 5.588: 99.0241% ( 1) 00:16:41.167 5.612 - 5.637: 99.0378% ( 2) 00:16:41.167 5.637 - 5.662: 99.0447% ( 1) 00:16:41.167 5.711 - 5.735: 99.0584% ( 2) 00:16:41.167 5.735 - 5.760: 99.0790% ( 3) 00:16:41.167 5.760 - 5.785: 99.0928% ( 2) 00:16:41.167 5.785 - 5.809: 99.1134% ( 3) 00:16:41.167 5.809 - 5.834: 99.1203% ( 1) 00:16:41.167 5.834 - 5.858: 99.1271% ( 1) 00:16:41.167 5.858 - 5.883: 99.1340% ( 1) 00:16:41.167 5.908 - 5.932: 99.1478% ( 2) 00:16:41.167 5.932 - 5.957: 99.1615% ( 2) 00:16:41.167 6.006 - 6.031: 99.1684% ( 1) 00:16:41.167 6.055 - 6.080: 99.1821% ( 2) 00:16:41.167 6.080 - 6.105: 99.1959% ( 2) 00:16:41.167 6.105 - 6.129: 99.2027% ( 1) 00:16:41.167 6.129 - 6.154: 99.2096% ( 1) 00:16:41.167 6.154 - 6.178: 99.2234% ( 2) 00:16:41.167 6.178 - 6.203: 99.2302% ( 1) 00:16:41.167 6.203 - 6.228: 99.2371% ( 1) 00:16:41.167 6.228 - 6.252: 99.2509% ( 2) 00:16:41.167 6.252 - 6.277: 99.2577% ( 1) 00:16:41.167 6.277 - 6.302: 99.2646% ( 1) 00:16:41.167 6.302 - 6.351: 99.2921% ( 4) 00:16:41.167 6.351 - 6.400: 99.2990% ( 1) 00:16:41.167 6.449 - 6.498: 99.3265% ( 4) 00:16:41.167 6.548 - 6.597: 99.3402% ( 2) 00:16:41.167 6.597 - 6.646: 99.3471% ( 1) 00:16:41.167 6.745 - 6.794: 99.3608% ( 2) 00:16:41.167 6.794 - 6.843: 99.3746% ( 2) 00:16:41.167 6.843 - 6.892: 99.3814% ( 1) 00:16:41.167 6.892 - 6.942: 99.3883% ( 1) 00:16:41.167 6.942 - 6.991: 99.3952% ( 1) 00:16:41.167 6.991 - 7.040: 99.4089% ( 2) 00:16:41.167 7.089 - 7.138: 99.4227% ( 2) 00:16:41.167 7.138 - 7.188: 99.4296% ( 1) 00:16:41.167 7.188 - 7.237: 99.4364% ( 1) 00:16:41.167 7.286 - 7.335: 99.4433% ( 1) 00:16:41.167 7.335 - 7.385: 99.4502% ( 1) 00:16:41.167 7.828 - 7.877: 99.4570% ( 1) 00:16:41.167 7.926 - 7.975: 99.4639% ( 1) 00:16:41.167 7.975 - 8.025: 99.4708% ( 1) 00:16:41.167 8.025 - 8.074: 99.4777% ( 1) 00:16:41.167 8.074 - 8.123: 99.4845% ( 1) 00:16:41.167 8.172 - 8.222: 99.4914% ( 1) 00:16:41.167 8.222 - 8.271: 99.4983% ( 1) 00:16:41.167 8.271 - 8.320: 99.5120% ( 2) 00:16:41.167 8.369 - 8.418: 99.5326% ( 3) 00:16:41.167 8.517 - 8.566: 99.5464% ( 2) 00:16:41.167 8.566 - 8.615: 99.5533% ( 1) 00:16:41.167 8.665 - 8.714: 99.5739% ( 3) 
00:16:41.167 8.714 - 8.763: 99.5876% ( 2) 00:16:41.167 8.763 - 8.812: 99.6151% ( 4) 00:16:41.167 8.812 - 8.862: 99.6289% ( 2) 00:16:41.167 8.911 - 8.960: 99.6426% ( 2) 00:16:41.167 8.960 - 9.009: 99.6632% ( 3) 00:16:41.167 9.058 - 9.108: 99.6701% ( 1) 00:16:41.167 9.157 - 9.206: 99.6770% ( 1) 00:16:41.167 9.305 - 9.354: 99.6907% ( 2) 00:16:41.167 9.403 - 9.452: 99.6976% ( 1) 00:16:41.167 9.649 - 9.698: 99.7045% ( 1) 00:16:41.167 9.698 - 9.748: 99.7182% ( 2) 00:16:41.167 10.043 - 10.092: 99.7251% ( 1) 00:16:41.167 10.191 - 10.240: 99.7320% ( 1) 00:16:41.167 11.914 - 11.963: 99.7388% ( 1) 00:16:41.167 12.012 - 12.062: 99.7457% ( 1) 00:16:41.167 12.258 - 12.308: 99.7526% ( 1) 00:16:41.167 12.308 - 12.357: 99.7595% ( 1) 00:16:41.167 12.455 - 12.505: 99.7663% ( 1) 00:16:41.167 12.554 - 12.603: 99.7732% ( 1) 00:16:41.167 12.603 - 12.702: 99.7869% ( 2) 00:16:41.167 12.702 - 12.800: 99.7938% ( 1) 00:16:41.167 12.800 - 12.898: 99.8007% ( 1) 00:16:41.167 12.898 - 12.997: 99.8076% ( 1) 00:16:41.167 12.997 - 13.095: 99.8144% ( 1) 00:16:41.167 13.194 - 13.292: 99.8282% ( 2) 00:16:41.167 13.588 - 13.686: 99.8351% ( 1) 00:16:41.167 13.686 - 13.785: 99.8625% ( 4) 00:16:41.167 13.785 - 13.883: 99.8694% ( 1) 00:16:41.167 13.982 - 14.080: 99.8832% ( 2) 00:16:41.167 14.769 - 14.868: 99.8900% ( 1) 00:16:41.167 15.163 - 15.262: 99.8969% ( 1) 00:16:41.167 15.852 - 15.951: 99.9038% ( 1) 00:16:41.167 16.148 - 16.246: 99.9107% ( 1) 00:16:41.167 20.185 - 20.283: 99.9175% ( 1) 00:16:41.167 3012.135 - 3024.738: 99.9244% ( 1) 00:16:41.167 4007.778 - 4032.985: 100.0000% ( 11) 00:16:41.167 00:16:41.167 Complete histogram 00:16:41.167 ================== 00:16:41.167 Range in us Cumulative Count 00:16:41.167 1.748 - 1.760: 0.0069% ( 1) 00:16:41.167 1.760 - 1.772: 0.0137% ( 1) 00:16:41.167 1.785 - 1.797: 0.0206% ( 1) 00:16:41.167 1.797 - 1.809: 0.3093% ( 42) 00:16:41.167 1.809 - 1.822: 0.6667% ( 52) 00:16:41.168 1.822 - 1.834: 1.1203% ( 66) 00:16:41.168 1.834 - 1.846: 1.3608% ( 35) 00:16:41.168 1.846 - 1.858: 2.7835% ( 207) 00:16:41.168 1.858 - 1.871: 7.6838% ( 713) 00:16:41.168 1.871 - 1.883: 11.5326% ( 560) 00:16:41.168 1.883 - 1.895: 15.7045% ( 607) 00:16:41.168 1.895 - 1.908: 18.4330% ( 397) 00:16:41.168 1.908 - 1.920: 30.4948% ( 1755) 00:16:41.168 1.920 - 1.932: 48.1787% ( 2573) 00:16:41.168 1.932 - 1.945: 62.4536% ( 2077) 00:16:41.168 1.945 - 1.957: 76.1031% ( 1986) 00:16:41.168 1.957 - 1.969: 83.3952% ( 1061) 00:16:41.168 1.969 - 1.982: 87.5670% ( 607) 00:16:41.168 1.982 - 1.994: 90.6117% ( 443) 00:16:41.168 1.994 - 2.006: 92.8247% ( 322) 00:16:41.168 2.006 - 2.018: 94.8385% ( 293) 00:16:41.168 2.018 - 2.031: 96.1375% ( 189) 00:16:41.168 2.031 - 2.043: 96.9966% ( 125) 00:16:41.168 2.043 - 2.055: 97.6082% ( 89) 00:16:41.168 2.055 - 2.068: 97.9794% ( 54) 00:16:41.168 2.068 - 2.080: 98.1306% ( 22) 00:16:41.168 2.080 - 2.092: 98.2818% ( 22) 00:16:41.168 2.092 - 2.105: 98.4399% ( 23) 00:16:41.168 2.105 - 2.117: 98.5292% ( 13) 00:16:41.168 2.117 - 2.129: 98.5911% ( 9) 00:16:41.168 2.129 - 2.142: 98.6460% ( 8) 00:16:41.168 2.142 - 2.154: 98.6804% ( 5) 00:16:41.168 2.154 - 2.166: 98.7423% ( 9) 00:16:41.168 2.166 - 2.178: 98.7766% ( 5) 00:16:41.168 2.178 - 2.191: 98.8110% ( 5) 00:16:41.168 2.203 - 2.215: 98.8247% ( 2) 00:16:41.168 2.215 - 2.228: 98.8591% ( 5) 00:16:41.168 2.228 - 2.240: 98.8660% ( 1) 00:16:41.168 2.240 - 2.252: 98.8729% ( 1) 00:16:41.168 2.252 - 2.265: 98.8935% ( 3) 00:16:41.168 2.265 - 2.277: 98.9003% ( 1) 00:16:41.168 2.277 - 2.289: 98.9072% ( 1) 00:16:41.168 2.289 - 2.302: 98.9141% ( 1) 00:16:41.168 2.302 
- 2.314: 98.9278% ( 2) 00:16:41.168 2.314 - 2.326: 98.9347% ( 1) 00:16:41.168 2.338 - 2.351: 98.9416% ( 1) 00:16:41.168 2.363 - 2.375: 98.9553% ( 2) 00:16:41.168 2.375 - 2.388: 98.9691% ( 2) 00:16:41.168 2.388 - 2.400: 98.9897% ( 3) 00:16:41.168 2.400 - 2.412: 99.0103% ( 3) 00:16:41.168 2.412 - 2.425: 99.0241% ( 2) 00:16:41.168 2.425 - 2.437: 99.0309% ( 1) 00:16:41.168 2.449 - 2.462: 99.0378% ( 1) 00:16:41.168 2.462 - 2.474: 99.0447% ( 1) 00:16:41.168 2.498 - 2.511: 99.0584% ( 2) 00:16:41.168 2.511 - 2.523: 99.0653% ( 1) 00:16:41.168 2.535 - 2.548: 99.0722% ( 1) 00:16:41.168 2.548 - 2.560: 99.0790% ( 1) 00:16:41.168 2.609 - 2.622: 99.0859% ( 1) 00:16:41.168 2.634 - 2.646: 99.0928% ( 1) 00:16:41.168 2.646 - 2.658: 99.0997% ( 1) 00:16:41.168 2.671 - 2.683: 99.1134% ( 2) 00:16:41.168 2.720 - 2.732: 99.1203% ( 1) 00:16:41.168 2.782 - 2.794: 99.1340% ( 2) 00:16:41.168 2.855 - 2.868: 99.1409% ( 1) 00:16:41.168 2.917 - 2.929: 99.1478% ( 1) 00:16:41.168 2.954 - 2.966: 99.1546% ( 1) 00:16:41.168 3.028 - 3.040: 99.1615% ( 1) 00:16:41.168 3.052 - 3.065: 99.1684% ( 1) 00:16:41.168 3.089 - 3.102: 99.1753% ( 1) 00:16:41.168 3.114 - 3.126: 99.1959% ( 3) 00:16:41.168 3.126 - 3.138: 99.2027% ( 1) 00:16:41.168 3.151 - 3.175: 99.2096% ( 1) 00:16:41.426 [2024-11-06 14:21:08.754362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:41.426 3.175 - 3.200: 99.2234% ( 2) 00:16:41.426 3.200 - 3.225: 99.2371% ( 2) 00:16:41.426 3.225 - 3.249: 99.2440% ( 1) 00:16:41.426 3.298 - 3.323: 99.2509% ( 1) 00:16:41.426 3.545 - 3.569: 99.2577% ( 1) 00:16:41.426 4.332 - 4.357: 99.2646% ( 1) 00:16:41.426 4.431 - 4.455: 99.2715% ( 1) 00:16:41.426 6.695 - 6.745: 99.2784% ( 1) 00:16:41.426 6.843 - 6.892: 99.2921% ( 2) 00:16:41.426 6.942 - 6.991: 99.2990% ( 1) 00:16:41.426 7.040 - 7.089: 99.3058% ( 1) 00:16:41.426 7.286 - 7.335: 99.3127% ( 1) 00:16:41.426 7.385 - 7.434: 99.3196% ( 1) 00:16:41.426 7.483 - 7.532: 99.3265% ( 1) 00:16:41.426 7.828 - 7.877: 99.3333% ( 1) 00:16:41.426 7.877 - 7.926: 99.3402% ( 1) 00:16:41.426 7.926 - 7.975: 99.3540% ( 2) 00:16:41.426 8.271 - 8.320: 99.3608% ( 1) 00:16:41.426 10.634 - 10.683: 99.3677% ( 1) 00:16:41.426 10.732 - 10.782: 99.3746% ( 1) 00:16:41.426 10.880 - 10.929: 99.3814% ( 1) 00:16:41.426 11.175 - 11.225: 99.3883% ( 1) 00:16:41.426 11.225 - 11.274: 99.3952% ( 1) 00:16:41.426 11.372 - 11.422: 99.4021% ( 1) 00:16:41.426 11.520 - 11.569: 99.4089% ( 1) 00:16:41.426 11.717 - 11.766: 99.4158% ( 1) 00:16:41.426 11.815 - 11.865: 99.4296% ( 2) 00:16:41.426 12.455 - 12.505: 99.4364% ( 1) 00:16:41.426 13.489 - 13.588: 99.4433% ( 1) 00:16:41.426 16.443 - 16.542: 99.4502% ( 1) 00:16:41.426 17.329 - 17.428: 99.4570% ( 1) 00:16:41.426 1008.246 - 1014.548: 99.4708% ( 2) 00:16:41.426 2003.889 - 2016.492: 99.4777% ( 1) 00:16:41.426 2016.492 - 2029.095: 99.4845% ( 1) 00:16:41.426 3012.135 - 3024.738: 99.4983% ( 2) 00:16:41.426 3957.366 - 3982.572: 99.5052% ( 1) 00:16:41.426 4007.778 - 4032.985: 99.9931% ( 71) 00:16:41.426 4990.818 - 5016.025: 100.0000% ( 1) 00:16:41.426 00:16:41.426 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:41.426 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:41.426 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:41.426
14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:41.426 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:41.426 [ 00:16:41.426 { 00:16:41.426 "allow_any_host": true, 00:16:41.426 "hosts": [], 00:16:41.426 "listen_addresses": [], 00:16:41.426 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:41.426 "subtype": "Discovery" 00:16:41.426 }, 00:16:41.426 { 00:16:41.426 "allow_any_host": true, 00:16:41.426 "hosts": [], 00:16:41.426 "listen_addresses": [ 00:16:41.426 { 00:16:41.426 "adrfam": "IPv4", 00:16:41.426 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:41.426 "trsvcid": "0", 00:16:41.426 "trtype": "VFIOUSER" 00:16:41.426 } 00:16:41.426 ], 00:16:41.426 "max_cntlid": 65519, 00:16:41.426 "max_namespaces": 32, 00:16:41.426 "min_cntlid": 1, 00:16:41.426 "model_number": "SPDK bdev Controller", 00:16:41.426 "namespaces": [ 00:16:41.426 { 00:16:41.426 "bdev_name": "Malloc1", 00:16:41.426 "name": "Malloc1", 00:16:41.426 "nguid": "F77A7E7EE7FB4770A8500ED8555A12D4", 00:16:41.426 "nsid": 1, 00:16:41.426 "uuid": "f77a7e7e-e7fb-4770-a850-0ed8555a12d4" 00:16:41.426 } 00:16:41.426 ], 00:16:41.426 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:41.426 "serial_number": "SPDK1", 00:16:41.426 "subtype": "NVMe" 00:16:41.426 }, 00:16:41.426 { 00:16:41.426 "allow_any_host": true, 00:16:41.426 "hosts": [], 00:16:41.426 "listen_addresses": [ 00:16:41.426 { 00:16:41.426 "adrfam": "IPv4", 00:16:41.426 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:41.426 "trsvcid": "0", 00:16:41.426 "trtype": "VFIOUSER" 00:16:41.426 } 00:16:41.426 ], 00:16:41.426 "max_cntlid": 65519, 00:16:41.426 "max_namespaces": 32, 00:16:41.426 "min_cntlid": 1, 00:16:41.426 "model_number": "SPDK bdev Controller", 00:16:41.426 "namespaces": [ 00:16:41.426 { 00:16:41.426 "bdev_name": "Malloc2", 00:16:41.426 "name": "Malloc2", 00:16:41.426 "nguid": "285976939BA447DBA4FF22D423E12392", 00:16:41.426 "nsid": 1, 00:16:41.426 "uuid": "28597693-9ba4-47db-a4ff-22d423e12392" 00:16:41.426 } 00:16:41.426 ], 00:16:41.426 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:41.426 "serial_number": "SPDK2", 00:16:41.426 "subtype": "NVMe" 00:16:41.426 } 00:16:41.426 ] 00:16:41.426 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:41.426 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=81221 00:16:41.426 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:41.427 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:16:41.427 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:41.427 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:41.427 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:16:41.427 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=1 00:16:41.427 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=2 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=3 00:16:41.685 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:16:41.685 [2024-11-06 14:21:09.290427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:41.942 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:41.942 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:41.942 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:16:41.942 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:41.942 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:42.207 Malloc3 00:16:42.208 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:42.471 [2024-11-06 14:21:09.850978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:42.471 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:42.471 Asynchronous Event Request test 00:16:42.471 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.471 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.471 Registering asynchronous event callbacks... 00:16:42.471 Starting namespace attribute notice tests for all controllers... 00:16:42.471 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:42.471 aer_cb - Changed Namespace 00:16:42.471 Cleaning up... 
00:16:42.471 [ 00:16:42.471 { 00:16:42.471 "allow_any_host": true, 00:16:42.471 "hosts": [], 00:16:42.471 "listen_addresses": [], 00:16:42.471 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:42.471 "subtype": "Discovery" 00:16:42.471 }, 00:16:42.471 { 00:16:42.471 "allow_any_host": true, 00:16:42.471 "hosts": [], 00:16:42.471 "listen_addresses": [ 00:16:42.471 { 00:16:42.471 "adrfam": "IPv4", 00:16:42.471 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:42.471 "trsvcid": "0", 00:16:42.471 "trtype": "VFIOUSER" 00:16:42.471 } 00:16:42.471 ], 00:16:42.471 "max_cntlid": 65519, 00:16:42.471 "max_namespaces": 32, 00:16:42.471 "min_cntlid": 1, 00:16:42.471 "model_number": "SPDK bdev Controller", 00:16:42.471 "namespaces": [ 00:16:42.471 { 00:16:42.471 "bdev_name": "Malloc1", 00:16:42.471 "name": "Malloc1", 00:16:42.471 "nguid": "F77A7E7EE7FB4770A8500ED8555A12D4", 00:16:42.471 "nsid": 1, 00:16:42.471 "uuid": "f77a7e7e-e7fb-4770-a850-0ed8555a12d4" 00:16:42.471 }, 00:16:42.471 { 00:16:42.471 "bdev_name": "Malloc3", 00:16:42.471 "name": "Malloc3", 00:16:42.471 "nguid": "5483B249A08A4B47BF0D8D3C4FE4AF2C", 00:16:42.471 "nsid": 2, 00:16:42.471 "uuid": "5483b249-a08a-4b47-bf0d-8d3c4fe4af2c" 00:16:42.471 } 00:16:42.471 ], 00:16:42.471 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:42.471 "serial_number": "SPDK1", 00:16:42.471 "subtype": "NVMe" 00:16:42.471 }, 00:16:42.471 { 00:16:42.471 "allow_any_host": true, 00:16:42.471 "hosts": [], 00:16:42.471 "listen_addresses": [ 00:16:42.471 { 00:16:42.471 "adrfam": "IPv4", 00:16:42.471 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:42.471 "trsvcid": "0", 00:16:42.471 "trtype": "VFIOUSER" 00:16:42.471 } 00:16:42.471 ], 00:16:42.471 "max_cntlid": 65519, 00:16:42.471 "max_namespaces": 32, 00:16:42.471 "min_cntlid": 1, 00:16:42.471 "model_number": "SPDK bdev Controller", 00:16:42.471 "namespaces": [ 00:16:42.471 { 00:16:42.471 "bdev_name": "Malloc2", 00:16:42.471 "name": "Malloc2", 00:16:42.471 "nguid": "285976939BA447DBA4FF22D423E12392", 00:16:42.471 "nsid": 1, 00:16:42.471 "uuid": "28597693-9ba4-47db-a4ff-22d423e12392" 00:16:42.471 } 00:16:42.471 ], 00:16:42.471 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:42.471 "serial_number": "SPDK2", 00:16:42.471 "subtype": "NVMe" 00:16:42.471 } 00:16:42.471 ] 00:16:42.471 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 81221 00:16:42.471 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:42.471 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:42.471 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:42.471 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:42.729 [2024-11-06 14:21:10.110644] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:16:42.729 [2024-11-06 14:21:10.110734] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81260 ] 00:16:42.729 [2024-11-06 14:21:10.278136] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:42.729 [2024-11-06 14:21:10.289338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:42.729 [2024-11-06 14:21:10.289369] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5d437f0000 00:16:42.729 [2024-11-06 14:21:10.290329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.291335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.292341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.293356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.294357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.295358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.296369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.297383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.729 [2024-11-06 14:21:10.298388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:42.729 [2024-11-06 14:21:10.298405] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5d437e5000 00:16:42.729 [2024-11-06 14:21:10.299521] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:42.729 [2024-11-06 14:21:10.312565] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:42.729 [2024-11-06 14:21:10.312612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:42.729 [2024-11-06 14:21:10.316685] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:42.729 [2024-11-06 14:21:10.316777] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:42.729 [2024-11-06 14:21:10.317180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 
00:16:42.729 [2024-11-06 14:21:10.317206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:42.729 [2024-11-06 14:21:10.317212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:42.729 [2024-11-06 14:21:10.317680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:42.729 [2024-11-06 14:21:10.317696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:42.729 [2024-11-06 14:21:10.317709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:42.729 [2024-11-06 14:21:10.318676] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:42.729 [2024-11-06 14:21:10.318695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:42.729 [2024-11-06 14:21:10.318704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:42.729 [2024-11-06 14:21:10.319690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:42.729 [2024-11-06 14:21:10.319711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:42.729 [2024-11-06 14:21:10.320694] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:42.729 [2024-11-06 14:21:10.320711] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:42.729 [2024-11-06 14:21:10.320718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:42.729 [2024-11-06 14:21:10.320729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:42.729 [2024-11-06 14:21:10.320837] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:42.730 [2024-11-06 14:21:10.320842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:42.730 [2024-11-06 14:21:10.320849] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:42.730 [2024-11-06 14:21:10.321699] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:42.730 [2024-11-06 14:21:10.322693] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:42.730 [2024-11-06 14:21:10.323699] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:42.730 [2024-11-06 14:21:10.324705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:42.730 [2024-11-06 14:21:10.324763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:42.730 [2024-11-06 14:21:10.325709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:42.730 [2024-11-06 14:21:10.325731] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:42.730 [2024-11-06 14:21:10.325737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.325758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:42.730 [2024-11-06 14:21:10.325767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.325786] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.730 [2024-11-06 14:21:10.325792] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.730 [2024-11-06 14:21:10.325799] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.730 [2024-11-06 14:21:10.325813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.730 [2024-11-06 14:21:10.333973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:42.730 [2024-11-06 14:21:10.334001] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:42.730 [2024-11-06 14:21:10.334009] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:42.730 [2024-11-06 14:21:10.334014] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:42.730 [2024-11-06 14:21:10.334021] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:42.730 [2024-11-06 14:21:10.334026] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:42.730 [2024-11-06 14:21:10.334034] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:42.730 [2024-11-06 14:21:10.334039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.334053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 
00:16:42.730 [2024-11-06 14:21:10.334065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:42.730 [2024-11-06 14:21:10.340054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:42.730 [2024-11-06 14:21:10.340080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.730 [2024-11-06 14:21:10.340092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.730 [2024-11-06 14:21:10.340100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.730 [2024-11-06 14:21:10.340110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.730 [2024-11-06 14:21:10.340115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.340133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.340142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:42.730 [2024-11-06 14:21:10.347968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:42.730 [2024-11-06 14:21:10.347987] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:42.730 [2024-11-06 14:21:10.347996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.348008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.348017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.348028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:42.730 [2024-11-06 14:21:10.355965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:42.730 [2024-11-06 14:21:10.356045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.356060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:42.730 [2024-11-06 14:21:10.356072] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:42.730 [2024-11-06 14:21:10.356079] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002f9000 00:16:42.730 [2024-11-06 14:21:10.356084] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.730 [2024-11-06 14:21:10.356093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:42.988 [2024-11-06 14:21:10.364961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:42.988 [2024-11-06 14:21:10.364997] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:42.988 [2024-11-06 14:21:10.365010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:42.988 [2024-11-06 14:21:10.365025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:42.988 [2024-11-06 14:21:10.365036] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.989 [2024-11-06 14:21:10.365044] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.989 [2024-11-06 14:21:10.365049] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.989 [2024-11-06 14:21:10.365058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.372966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.373004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.373017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.373035] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.989 [2024-11-06 14:21:10.373039] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.989 [2024-11-06 14:21:10.373047] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.989 [2024-11-06 14:21:10.373056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.380965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.380997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.381006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.381018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state 
to set supported features (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.381026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.381031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.381038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.381043] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:42.989 [2024-11-06 14:21:10.381051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:42.989 [2024-11-06 14:21:10.381057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:42.989 [2024-11-06 14:21:10.381089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.388960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.388989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.396962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.396989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.404962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.404989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.412962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.412996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:42.989 [2024-11-06 14:21:10.413005] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:42.989 [2024-11-06 14:21:10.413011] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:42.989 [2024-11-06 14:21:10.413016] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:42.989 [2024-11-06 14:21:10.413021] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:42.989 [2024-11-06 14:21:10.413029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:42.989 [2024-11-06 14:21:10.413039] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 
00:16:42.989 [2024-11-06 14:21:10.413045] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:42.989 [2024-11-06 14:21:10.413052] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.989 [2024-11-06 14:21:10.413058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.413067] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:42.989 [2024-11-06 14:21:10.413072] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.989 [2024-11-06 14:21:10.413079] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.989 [2024-11-06 14:21:10.413085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.413096] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:42.989 [2024-11-06 14:21:10.413100] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:42.989 [2024-11-06 14:21:10.413105] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.989 [2024-11-06 14:21:10.413113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:42.989 [2024-11-06 14:21:10.420965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.420996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.421007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:42.989 [2024-11-06 14:21:10.421015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:42.989 ===================================================== 00:16:42.989 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:42.989 ===================================================== 00:16:42.989 Controller Capabilities/Features 00:16:42.989 ================================ 00:16:42.989 Vendor ID: 4e58 00:16:42.989 Subsystem Vendor ID: 4e58 00:16:42.989 Serial Number: SPDK2 00:16:42.989 Model Number: SPDK bdev Controller 00:16:42.989 Firmware Version: 25.01 00:16:42.989 Recommended Arb Burst: 6 00:16:42.989 IEEE OUI Identifier: 8d 6b 50 00:16:42.989 Multi-path I/O 00:16:42.989 May have multiple subsystem ports: Yes 00:16:42.989 May have multiple controllers: Yes 00:16:42.989 Associated with SR-IOV VF: No 00:16:42.989 Max Data Transfer Size: 131072 00:16:42.989 Max Number of Namespaces: 32 00:16:42.989 Max Number of I/O Queues: 127 00:16:42.989 NVMe Specification Version (VS): 1.3 00:16:42.989 NVMe Specification Version (Identify): 1.3 00:16:42.989 Maximum Queue Entries: 256 00:16:42.989 Contiguous Queues Required: Yes 00:16:42.989 Arbitration Mechanisms Supported 00:16:42.989 Weighted Round Robin: Not Supported 00:16:42.989 Vendor 
Specific: Not Supported 00:16:42.989 Reset Timeout: 15000 ms 00:16:42.989 Doorbell Stride: 4 bytes 00:16:42.989 NVM Subsystem Reset: Not Supported 00:16:42.989 Command Sets Supported 00:16:42.989 NVM Command Set: Supported 00:16:42.989 Boot Partition: Not Supported 00:16:42.989 Memory Page Size Minimum: 4096 bytes 00:16:42.989 Memory Page Size Maximum: 4096 bytes 00:16:42.989 Persistent Memory Region: Not Supported 00:16:42.989 Optional Asynchronous Events Supported 00:16:42.989 Namespace Attribute Notices: Supported 00:16:42.989 Firmware Activation Notices: Not Supported 00:16:42.989 ANA Change Notices: Not Supported 00:16:42.989 PLE Aggregate Log Change Notices: Not Supported 00:16:42.989 LBA Status Info Alert Notices: Not Supported 00:16:42.989 EGE Aggregate Log Change Notices: Not Supported 00:16:42.989 Normal NVM Subsystem Shutdown event: Not Supported 00:16:42.989 Zone Descriptor Change Notices: Not Supported 00:16:42.989 Discovery Log Change Notices: Not Supported 00:16:42.989 Controller Attributes 00:16:42.989 128-bit Host Identifier: Supported 00:16:42.989 Non-Operational Permissive Mode: Not Supported 00:16:42.989 NVM Sets: Not Supported 00:16:42.989 Read Recovery Levels: Not Supported 00:16:42.989 Endurance Groups: Not Supported 00:16:42.989 Predictable Latency Mode: Not Supported 00:16:42.989 Traffic Based Keep ALive: Not Supported 00:16:42.989 Namespace Granularity: Not Supported 00:16:42.989 SQ Associations: Not Supported 00:16:42.989 UUID List: Not Supported 00:16:42.989 Multi-Domain Subsystem: Not Supported 00:16:42.989 Fixed Capacity Management: Not Supported 00:16:42.989 Variable Capacity Management: Not Supported 00:16:42.989 Delete Endurance Group: Not Supported 00:16:42.989 Delete NVM Set: Not Supported 00:16:42.989 Extended LBA Formats Supported: Not Supported 00:16:42.989 Flexible Data Placement Supported: Not Supported 00:16:42.989 00:16:42.989 Controller Memory Buffer Support 00:16:42.989 ================================ 00:16:42.989 Supported: No 00:16:42.989 00:16:42.989 Persistent Memory Region Support 00:16:42.989 ================================ 00:16:42.989 Supported: No 00:16:42.989 00:16:42.989 Admin Command Set Attributes 00:16:42.989 ============================ 00:16:42.989 Security Send/Receive: Not Supported 00:16:42.989 Format NVM: Not Supported 00:16:42.989 Firmware Activate/Download: Not Supported 00:16:42.989 Namespace Management: Not Supported 00:16:42.989 Device Self-Test: Not Supported 00:16:42.989 Directives: Not Supported 00:16:42.989 NVMe-MI: Not Supported 00:16:42.989 Virtualization Management: Not Supported 00:16:42.989 Doorbell Buffer Config: Not Supported 00:16:42.989 Get LBA Status Capability: Not Supported 00:16:42.989 Command & Feature Lockdown Capability: Not Supported 00:16:42.989 Abort Command Limit: 4 00:16:42.989 Async Event Request Limit: 4 00:16:42.989 Number of Firmware Slots: N/A 00:16:42.989 Firmware Slot 1 Read-Only: N/A 00:16:42.989 Firmware Activation Without Reset: N/A 00:16:42.989 Multiple Update Detection Support: N/A 00:16:42.989 Firmware Update Granularity: No Information Provided 00:16:42.989 Per-Namespace SMART Log: No 00:16:42.989 Asymmetric Namespace Access Log Page: Not Supported 00:16:42.989 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:42.989 Command Effects Log Page: Supported 00:16:42.989 Get Log Page Extended Data: Supported 00:16:42.989 Telemetry Log Pages: Not Supported 00:16:42.989 Persistent Event Log Pages: Not Supported 00:16:42.989 Supported Log Pages Log Page: May Support 00:16:42.989 Commands 
Supported & Effects Log Page: Not Supported 00:16:42.989 Feature Identifiers & Effects Log Page:May Support 00:16:42.989 NVMe-MI Commands & Effects Log Page: May Support 00:16:42.989 Data Area 4 for Telemetry Log: Not Supported 00:16:42.989 Error Log Page Entries Supported: 128 00:16:42.989 Keep Alive: Supported 00:16:42.989 Keep Alive Granularity: 10000 ms 00:16:42.989 00:16:42.989 NVM Command Set Attributes 00:16:42.989 ========================== 00:16:42.989 Submission Queue Entry Size 00:16:42.989 Max: 64 00:16:42.989 Min: 64 00:16:42.989 Completion Queue Entry Size 00:16:42.989 Max: 16 00:16:42.989 Min: 16 00:16:42.989 Number of Namespaces: 32 00:16:42.989 Compare Command: Supported 00:16:42.989 Write Uncorrectable Command: Not Supported 00:16:42.989 Dataset Management Command: Supported 00:16:42.989 Write Zeroes Command: Supported 00:16:42.990 Set Features Save Field: Not Supported 00:16:42.990 Reservations: Not Supported 00:16:42.990 Timestamp: Not Supported 00:16:42.990 Copy: Supported 00:16:42.990 Volatile Write Cache: Present 00:16:42.990 Atomic Write Unit (Normal): 1 00:16:42.990 Atomic Write Unit (PFail): 1 00:16:42.990 Atomic Compare & Write Unit: 1 00:16:42.990 Fused Compare & Write: Supported 00:16:42.990 Scatter-Gather List 00:16:42.990 SGL Command Set: Supported (Dword aligned) 00:16:42.990 SGL Keyed: Not Supported 00:16:42.990 SGL Bit Bucket Descriptor: Not Supported 00:16:42.990 SGL Metadata Pointer: Not Supported 00:16:42.990 Oversized SGL: Not Supported 00:16:42.990 SGL Metadata Address: Not Supported 00:16:42.990 SGL Offset: Not Supported 00:16:42.990 Transport SGL Data Block: Not Supported 00:16:42.990 Replay Protected Memory Block: Not Supported 00:16:42.990 00:16:42.990 Firmware Slot Information 00:16:42.990 ========================= 00:16:42.990 Active slot: 1 00:16:42.990 Slot 1 Firmware Revision: 25.01 00:16:42.990 00:16:42.990 00:16:42.990 Commands Supported and Effects 00:16:42.990 ============================== 00:16:42.990 Admin Commands 00:16:42.990 -------------- 00:16:42.990 Get Log Page (02h): Supported 00:16:42.990 Identify (06h): Supported 00:16:42.990 Abort (08h): Supported 00:16:42.990 Set Features (09h): Supported 00:16:42.990 Get Features (0Ah): Supported 00:16:42.990 Asynchronous Event Request (0Ch): Supported 00:16:42.990 Keep Alive (18h): Supported 00:16:42.990 I/O Commands 00:16:42.990 ------------ 00:16:42.990 Flush (00h): Supported LBA-Change 00:16:42.990 Write (01h): Supported LBA-Change 00:16:42.990 Read (02h): Supported 00:16:42.990 Compare (05h): Supported 00:16:42.990 Write Zeroes (08h): Supported LBA-Change 00:16:42.990 Dataset Management (09h): Supported LBA-Change 00:16:42.990 Copy (19h): Supported LBA-Change 00:16:42.990 00:16:42.990 Error Log 00:16:42.990 ========= 00:16:42.990 00:16:42.990 Arbitration 00:16:42.990 =========== 00:16:42.990 Arbitration Burst: 1 00:16:42.990 00:16:42.990 Power Management 00:16:42.990 ================ 00:16:42.990 Number of Power States: 1 00:16:42.990 Current Power State: Power State #0 00:16:42.990 Power State #0: 00:16:42.990 Max Power: 0.00 W 00:16:42.990 Non-Operational State: Operational 00:16:42.990 Entry Latency: Not Reported 00:16:42.990 Exit Latency: Not Reported 00:16:42.990 Relative Read Throughput: 0 00:16:42.990 Relative Read Latency: 0 00:16:42.990 Relative Write Throughput: 0 00:16:42.990 Relative Write Latency: 0 00:16:42.990 Idle Power: Not Reported 00:16:42.990 Active Power: Not Reported 00:16:42.990 Non-Operational Permissive Mode: Not Supported 00:16:42.990 00:16:42.990 Health 
Information 00:16:42.990 ================== 00:16:42.990 Critical Warnings: 00:16:42.990 Available Spare Space: OK 00:16:42.990 Temperature: OK 00:16:42.990 Device Reliability: OK 00:16:42.990 Read Only: No 00:16:42.990 Volatile Memory Backup: OK 00:16:42.990 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:42.990 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:42.990 Available Spare: 0% 00:16:42.990 Available Spare Threshold: 0% 00:16:42.990 [2024-11-06 14:21:10.421141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:42.990 [2024-11-06 14:21:10.428962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:42.990 [2024-11-06 14:21:10.429034] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:42.990 [2024-11-06 14:21:10.429046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.990 [2024-11-06 14:21:10.429055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.990 [2024-11-06 14:21:10.429061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.990 [2024-11-06 14:21:10.429068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.990 [2024-11-06 14:21:10.429121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:42.990 [2024-11-06 14:21:10.429137] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:42.990 [2024-11-06 14:21:10.430135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:42.990 [2024-11-06 14:21:10.430202] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:42.990 [2024-11-06 14:21:10.430212] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:42.990 [2024-11-06 14:21:10.431121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:42.990 [2024-11-06 14:21:10.431144] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:42.990 [2024-11-06 14:21:10.431550] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:42.990 [2024-11-06 14:21:10.432627] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:42.990 Life Percentage Used: 0% 00:16:42.990 Data Units Read: 0 00:16:42.990 Data Units Written: 0 00:16:42.990 Host Read Commands: 0 00:16:42.990 Host Write Commands: 0 00:16:42.990 Controller Busy Time: 0 minutes 00:16:42.990 Power Cycles: 0 00:16:42.990 Power On Hours: 0 hours 00:16:42.990 Unsafe Shutdowns: 0 00:16:42.990 Unrecoverable Media Errors: 0 00:16:42.990 Lifetime Error Log Entries: 0 00:16:42.990 Warning 
Temperature Time: 0 minutes 00:16:42.990 Critical Temperature Time: 0 minutes 00:16:42.990 00:16:42.990 Number of Queues 00:16:42.990 ================ 00:16:42.990 Number of I/O Submission Queues: 127 00:16:42.990 Number of I/O Completion Queues: 127 00:16:42.990 00:16:42.990 Active Namespaces 00:16:42.990 ================= 00:16:42.990 Namespace ID:1 00:16:42.990 Error Recovery Timeout: Unlimited 00:16:42.990 Command Set Identifier: NVM (00h) 00:16:42.990 Deallocate: Supported 00:16:42.990 Deallocated/Unwritten Error: Not Supported 00:16:42.990 Deallocated Read Value: Unknown 00:16:42.990 Deallocate in Write Zeroes: Not Supported 00:16:42.990 Deallocated Guard Field: 0xFFFF 00:16:42.990 Flush: Supported 00:16:42.990 Reservation: Supported 00:16:42.990 Namespace Sharing Capabilities: Multiple Controllers 00:16:42.990 Size (in LBAs): 131072 (0GiB) 00:16:42.990 Capacity (in LBAs): 131072 (0GiB) 00:16:42.990 Utilization (in LBAs): 131072 (0GiB) 00:16:42.990 NGUID: 285976939BA447DBA4FF22D423E12392 00:16:42.990 UUID: 28597693-9ba4-47db-a4ff-22d423e12392 00:16:42.990 Thin Provisioning: Not Supported 00:16:42.990 Per-NS Atomic Units: Yes 00:16:42.990 Atomic Boundary Size (Normal): 0 00:16:42.990 Atomic Boundary Size (PFail): 0 00:16:42.990 Atomic Boundary Offset: 0 00:16:42.990 Maximum Single Source Range Length: 65535 00:16:42.990 Maximum Copy Length: 65535 00:16:42.990 Maximum Source Range Count: 1 00:16:42.990 NGUID/EUI64 Never Reused: No 00:16:42.990 Namespace Write Protected: No 00:16:42.990 Number of LBA Formats: 1 00:16:42.990 Current LBA Format: LBA Format #00 00:16:42.990 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:42.990 00:16:42.990 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:43.247 [2024-11-06 14:21:10.781091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:48.551 Initializing NVMe Controllers 00:16:48.551 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:48.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:48.551 Initialization complete. Launching workers. 
00:16:48.551 ======================================================== 00:16:48.551 Latency(us) 00:16:48.551 Device Information : IOPS MiB/s Average min max 00:16:48.551 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35654.89 139.28 3591.80 971.70 13655.22 00:16:48.551 ======================================================== 00:16:48.551 Total : 35654.89 139.28 3591.80 971.70 13655.22 00:16:48.551 00:16:48.551 [2024-11-06 14:21:15.883750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:48.551 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:48.809 [2024-11-06 14:21:16.243674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:54.224 Initializing NVMe Controllers 00:16:54.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:54.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:54.224 Initialization complete. Launching workers. 00:16:54.224 ======================================================== 00:16:54.224 Latency(us) 00:16:54.224 Device Information : IOPS MiB/s Average min max 00:16:54.224 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39863.00 155.71 3210.83 999.09 6630.58 00:16:54.224 ======================================================== 00:16:54.224 Total : 39863.00 155.71 3210.83 999.09 6630.58 00:16:54.224 00:16:54.224 [2024-11-06 14:21:21.253462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:54.224 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:54.224 [2024-11-06 14:21:21.582205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:59.501 [2024-11-06 14:21:26.704569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:59.501 Initializing NVMe Controllers 00:16:59.501 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:59.501 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:59.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:59.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:59.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:59.501 Initialization complete. Launching workers. 
00:16:59.501 Starting thread on core 2 00:16:59.501 Starting thread on core 1 00:16:59.501 Starting thread on core 3 00:16:59.501 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:59.501 [2024-11-06 14:21:27.060570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:02.790 [2024-11-06 14:21:30.214522] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:02.791 Initializing NVMe Controllers 00:17:02.791 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:02.791 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:02.791 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:02.791 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:02.791 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:02.791 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:02.791 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:02.791 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:02.791 Initialization complete. Launching workers. 00:17:02.791 Starting thread on core 1 with urgent priority queue 00:17:02.791 Starting thread on core 2 with urgent priority queue 00:17:02.791 Starting thread on core 3 with urgent priority queue 00:17:02.791 Starting thread on core 0 with urgent priority queue 00:17:02.791 SPDK bdev Controller (SPDK2 ) core 0: 1194.67 IO/s 83.71 secs/100000 ios 00:17:02.791 SPDK bdev Controller (SPDK2 ) core 1: 1066.67 IO/s 93.75 secs/100000 ios 00:17:02.791 SPDK bdev Controller (SPDK2 ) core 2: 1066.67 IO/s 93.75 secs/100000 ios 00:17:02.791 SPDK bdev Controller (SPDK2 ) core 3: 1066.67 IO/s 93.75 secs/100000 ios 00:17:02.791 ======================================================== 00:17:02.791 00:17:02.791 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:03.049 [2024-11-06 14:21:30.581631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:03.049 Initializing NVMe Controllers 00:17:03.049 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.049 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.049 Namespace ID: 1 size: 0GB 00:17:03.049 Initialization complete. 00:17:03.049 INFO: using host memory buffer for IO 00:17:03.049 Hello world! 
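
Every example binary in this run (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the target the same way: instead of a PCI address, it is given an SPDK transport ID string naming the VFIOUSER transport, the vfio-user socket directory and the subsystem NQN. A condensed sketch of the invocations traced above; flags and the NQN are exactly as they appear in this log, with repo-relative paths substituted for /home/vagrant/spdk_repo/spdk:

    # Transport ID naming the vfio-user socket directory and subsystem NQN:
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 4 KiB read workload, queue depth 128, 5 s, core mask 0x2 (-s 256 = DPDK memory in MB):
    build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # Single-I/O smoke test against the same controller:
    build/examples/hello_world -d 256 -g -r "$TRID"
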
00:17:03.049 [2024-11-06 14:21:30.589871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:03.049 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:03.618 [2024-11-06 14:21:30.953529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:04.553 Initializing NVMe Controllers 00:17:04.553 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:04.553 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:04.553 Initialization complete. Launching workers. 00:17:04.553 submit (in ns) avg, min, max = 9067.5, 3768.5, 4016780.8 00:17:04.553 complete (in ns) avg, min, max = 21728.9, 1861.5, 4018900.8 00:17:04.553 00:17:04.553 Submit histogram 00:17:04.553 ================ 00:17:04.553 Range in us Cumulative Count 00:17:04.553 3.766 - 3.791: 0.0067% ( 1) 00:17:04.553 3.791 - 3.815: 0.0202% ( 2) 00:17:04.553 3.815 - 3.840: 0.0336% ( 2) 00:17:04.553 3.840 - 3.865: 0.2488% ( 32) 00:17:04.553 3.865 - 3.889: 0.6322% ( 57) 00:17:04.553 3.889 - 3.914: 1.8159% ( 176) 00:17:04.553 3.914 - 3.938: 4.7549% ( 437) 00:17:04.553 3.938 - 3.963: 10.1016% ( 795) 00:17:04.553 3.963 - 3.988: 15.6366% ( 823) 00:17:04.553 3.988 - 4.012: 21.2052% ( 828) 00:17:04.553 4.012 - 4.037: 27.4195% ( 924) 00:17:04.553 4.037 - 4.062: 33.5127% ( 906) 00:17:04.553 4.062 - 4.086: 39.7404% ( 926) 00:17:04.553 4.086 - 4.111: 46.6003% ( 1020) 00:17:04.553 4.111 - 4.135: 54.0722% ( 1111) 00:17:04.553 4.135 - 4.160: 60.7371% ( 991) 00:17:04.553 4.160 - 4.185: 66.3595% ( 836) 00:17:04.553 4.185 - 4.209: 70.7647% ( 655) 00:17:04.553 4.209 - 4.234: 74.6654% ( 580) 00:17:04.553 4.234 - 4.258: 77.9676% ( 491) 00:17:04.553 4.258 - 4.283: 80.7317% ( 411) 00:17:04.553 4.283 - 4.308: 83.0184% ( 340) 00:17:04.553 4.308 - 4.332: 85.0763% ( 306) 00:17:04.553 4.332 - 4.357: 87.0402% ( 292) 00:17:04.553 4.357 - 4.382: 88.6677% ( 242) 00:17:04.553 4.382 - 4.406: 90.0397% ( 204) 00:17:04.553 4.406 - 4.431: 91.0888% ( 156) 00:17:04.553 4.431 - 4.455: 92.1918% ( 164) 00:17:04.553 4.455 - 4.480: 93.0863% ( 133) 00:17:04.553 4.480 - 4.505: 94.0884% ( 149) 00:17:04.553 4.505 - 4.529: 94.8080% ( 107) 00:17:04.553 4.529 - 4.554: 95.4335% ( 93) 00:17:04.553 4.554 - 4.578: 96.0118% ( 86) 00:17:04.553 4.578 - 4.603: 96.4557% ( 66) 00:17:04.553 4.603 - 4.628: 96.7651% ( 46) 00:17:04.553 4.628 - 4.652: 97.0072% ( 36) 00:17:04.553 4.652 - 4.677: 97.2829% ( 41) 00:17:04.553 4.677 - 4.702: 97.4578% ( 26) 00:17:04.553 4.702 - 4.726: 97.6461% ( 28) 00:17:04.553 4.726 - 4.751: 97.7201% ( 11) 00:17:04.553 4.751 - 4.775: 97.7739% ( 8) 00:17:04.553 4.775 - 4.800: 97.8411% ( 10) 00:17:04.553 4.800 - 4.825: 97.9084% ( 10) 00:17:04.553 4.825 - 4.849: 97.9689% ( 9) 00:17:04.553 4.849 - 4.874: 97.9891% ( 3) 00:17:04.553 4.874 - 4.898: 98.0227% ( 5) 00:17:04.553 4.898 - 4.923: 98.0496% ( 4) 00:17:04.553 4.923 - 4.948: 98.0698% ( 3) 00:17:04.553 4.948 - 4.972: 98.0765% ( 1) 00:17:04.553 4.972 - 4.997: 98.0900% ( 2) 00:17:04.553 4.997 - 5.022: 98.1169% ( 4) 00:17:04.553 5.046 - 5.071: 98.1371% ( 3) 00:17:04.553 5.095 - 5.120: 98.1438% ( 1) 00:17:04.553 5.120 - 5.145: 98.1572% ( 2) 00:17:04.553 5.194 - 5.218: 98.1640% ( 1) 00:17:04.553 5.218 - 5.243: 98.1707% ( 1) 00:17:04.553 5.366 - 5.391: 98.1841% ( 2) 00:17:04.553 5.415 - 5.440: 98.1909% ( 1) 
00:17:04.553 5.465 - 5.489: 98.1976% ( 1) 00:17:04.553 5.514 - 5.538: 98.2043% ( 1) 00:17:04.553 5.538 - 5.563: 98.2245% ( 3) 00:17:04.553 5.563 - 5.588: 98.2379% ( 2) 00:17:04.553 5.588 - 5.612: 98.2581% ( 3) 00:17:04.553 5.612 - 5.637: 98.2716% ( 2) 00:17:04.553 5.637 - 5.662: 98.2917% ( 3) 00:17:04.553 5.662 - 5.686: 98.2985% ( 1) 00:17:04.553 5.686 - 5.711: 98.3119% ( 2) 00:17:04.553 5.711 - 5.735: 98.3657% ( 8) 00:17:04.553 5.735 - 5.760: 98.3792% ( 2) 00:17:04.553 5.760 - 5.785: 98.4061% ( 4) 00:17:04.553 5.785 - 5.809: 98.4464% ( 6) 00:17:04.553 5.809 - 5.834: 98.4868% ( 6) 00:17:04.553 5.834 - 5.858: 98.5137% ( 4) 00:17:04.553 5.858 - 5.883: 98.5473% ( 5) 00:17:04.553 5.883 - 5.908: 98.5742% ( 4) 00:17:04.553 5.908 - 5.932: 98.5877% ( 2) 00:17:04.553 5.932 - 5.957: 98.6146% ( 4) 00:17:04.553 5.957 - 5.982: 98.6213% ( 1) 00:17:04.553 5.982 - 6.006: 98.6280% ( 1) 00:17:04.553 6.006 - 6.031: 98.6549% ( 4) 00:17:04.553 6.031 - 6.055: 98.6684% ( 2) 00:17:04.553 6.055 - 6.080: 98.6953% ( 4) 00:17:04.553 6.080 - 6.105: 98.7491% ( 8) 00:17:04.553 6.105 - 6.129: 98.7693% ( 3) 00:17:04.553 6.129 - 6.154: 98.7760% ( 1) 00:17:04.553 6.154 - 6.178: 98.7962% ( 3) 00:17:04.553 6.178 - 6.203: 98.8096% ( 2) 00:17:04.553 6.203 - 6.228: 98.8231% ( 2) 00:17:04.553 6.228 - 6.252: 98.8365% ( 2) 00:17:04.553 6.252 - 6.277: 98.8567% ( 3) 00:17:04.553 6.277 - 6.302: 98.8701% ( 2) 00:17:04.553 6.302 - 6.351: 98.8970% ( 4) 00:17:04.553 6.351 - 6.400: 98.9441% ( 7) 00:17:04.553 6.400 - 6.449: 98.9643% ( 3) 00:17:04.553 6.449 - 6.498: 98.9710% ( 1) 00:17:04.553 6.498 - 6.548: 98.9912% ( 3) 00:17:04.553 6.597 - 6.646: 98.9979% ( 1) 00:17:04.553 6.646 - 6.695: 99.0046% ( 1) 00:17:04.553 6.745 - 6.794: 99.0181% ( 2) 00:17:04.553 6.794 - 6.843: 99.0248% ( 1) 00:17:04.553 6.843 - 6.892: 99.0383% ( 2) 00:17:04.553 6.892 - 6.942: 99.0517% ( 2) 00:17:04.553 6.942 - 6.991: 99.0786% ( 4) 00:17:04.553 6.991 - 7.040: 99.0988% ( 3) 00:17:04.553 7.040 - 7.089: 99.1190% ( 3) 00:17:04.553 7.089 - 7.138: 99.1593% ( 6) 00:17:04.553 7.138 - 7.188: 99.1862% ( 4) 00:17:04.553 7.237 - 7.286: 99.2131% ( 4) 00:17:04.553 7.286 - 7.335: 99.2199% ( 1) 00:17:04.553 7.335 - 7.385: 99.2535% ( 5) 00:17:04.553 7.434 - 7.483: 99.2737% ( 3) 00:17:04.553 7.483 - 7.532: 99.2871% ( 2) 00:17:04.553 7.532 - 7.582: 99.3006% ( 2) 00:17:04.553 7.582 - 7.631: 99.3073% ( 1) 00:17:04.553 7.631 - 7.680: 99.3140% ( 1) 00:17:04.553 7.680 - 7.729: 99.3207% ( 1) 00:17:04.553 7.729 - 7.778: 99.3275% ( 1) 00:17:04.553 7.828 - 7.877: 99.3342% ( 1) 00:17:04.553 7.975 - 8.025: 99.3476% ( 2) 00:17:04.553 8.025 - 8.074: 99.3544% ( 1) 00:17:04.553 8.320 - 8.369: 99.3813% ( 4) 00:17:04.553 8.566 - 8.615: 99.3947% ( 2) 00:17:04.553 8.665 - 8.714: 99.4014% ( 1) 00:17:04.553 8.714 - 8.763: 99.4082% ( 1) 00:17:04.553 8.763 - 8.812: 99.4149% ( 1) 00:17:04.553 8.812 - 8.862: 99.4216% ( 1) 00:17:04.553 9.058 - 9.108: 99.4283% ( 1) 00:17:04.554 9.108 - 9.157: 99.4485% ( 3) 00:17:04.554 9.255 - 9.305: 99.4552% ( 1) 00:17:04.554 9.403 - 9.452: 99.4687% ( 2) 00:17:04.554 9.797 - 9.846: 99.4889% ( 3) 00:17:04.554 9.846 - 9.895: 99.5090% ( 3) 00:17:04.554 9.895 - 9.945: 99.5158% ( 1) 00:17:04.554 10.043 - 10.092: 99.5225% ( 1) 00:17:04.554 10.092 - 10.142: 99.5292% ( 1) 00:17:04.554 10.240 - 10.289: 99.5359% ( 1) 00:17:04.554 10.289 - 10.338: 99.5427% ( 1) 00:17:04.554 10.338 - 10.388: 99.5494% ( 1) 00:17:04.554 10.388 - 10.437: 99.5561% ( 1) 00:17:04.554 10.437 - 10.486: 99.5628% ( 1) 00:17:04.554 10.486 - 10.535: 99.5696% ( 1) 00:17:04.554 10.535 - 10.585: 99.5763% ( 1) 
00:17:04.554 10.782 - 10.831: 99.5965% ( 3) 00:17:04.554 10.831 - 10.880: 99.6032% ( 1) 00:17:04.554 10.929 - 10.978: 99.6099% ( 1) 00:17:04.554 11.028 - 11.077: 99.6167% ( 1) 00:17:04.554 11.077 - 11.126: 99.6234% ( 1) 00:17:04.554 11.126 - 11.175: 99.6436% ( 3) 00:17:04.554 11.175 - 11.225: 99.6570% ( 2) 00:17:04.554 11.422 - 11.471: 99.6637% ( 1) 00:17:04.554 11.471 - 11.520: 99.6705% ( 1) 00:17:04.554 11.766 - 11.815: 99.6772% ( 1) 00:17:04.554 11.914 - 11.963: 99.6906% ( 2) 00:17:04.554 12.012 - 12.062: 99.6974% ( 1) 00:17:04.554 12.160 - 12.209: 99.7041% ( 1) 00:17:04.554 12.505 - 12.554: 99.7108% ( 1) 00:17:04.554 12.800 - 12.898: 99.7175% ( 1) 00:17:04.554 12.997 - 13.095: 99.7243% ( 1) 00:17:04.554 14.375 - 14.474: 99.7310% ( 1) 00:17:04.554 14.474 - 14.572: 99.7377% ( 1) 00:17:04.554 14.769 - 14.868: 99.7444% ( 1) 00:17:04.554 14.966 - 15.065: 99.7512% ( 1) 00:17:04.554 15.065 - 15.163: 99.7646% ( 2) 00:17:04.554 15.262 - 15.360: 99.7781% ( 2) 00:17:04.554 15.360 - 15.458: 99.7848% ( 1) 00:17:04.554 15.458 - 15.557: 99.7915% ( 1) 00:17:04.554 15.557 - 15.655: 99.8050% ( 2) 00:17:04.554 15.655 - 15.754: 99.8117% ( 1) 00:17:04.554 15.754 - 15.852: 99.8184% ( 1) 00:17:04.554 15.852 - 15.951: 99.8251% ( 1) 00:17:04.554 16.049 - 16.148: 99.8386% ( 2) 00:17:04.554 16.148 - 16.246: 99.8453% ( 1) 00:17:04.554 16.345 - 16.443: 99.8520% ( 1) 00:17:04.554 16.738 - 16.837: 99.8588% ( 1) 00:17:04.554 17.231 - 17.329: 99.8655% ( 1) 00:17:04.554 17.428 - 17.526: 99.8722% ( 1) 00:17:04.554 22.843 - 22.942: 99.8789% ( 1) 00:17:04.554 4007.778 - 4032.985: 100.0000% ( 18) 00:17:04.554 00:17:04.554 Complete histogram 00:17:04.554 ================== 00:17:04.554 Range in us Cumulative Count 00:17:04.554 1.858 - 1.871: 0.0067% ( 1) 00:17:04.554 1.871 - 1.883: 0.0605% ( 8) 00:17:04.554 1.883 - 1.895: 0.1009% ( 6) 00:17:04.554 1.895 - 1.908: 0.1816% ( 12) 00:17:04.554 1.908 - 1.920: 1.0155% ( 124) 00:17:04.554 1.920 - 1.932: 3.8268% ( 418) 00:17:04.554 1.932 - 1.945: 7.5728% ( 557) 00:17:04.554 1.945 - 1.957: 10.9086% ( 496) 00:17:04.554 1.957 - 1.969: 13.4575% ( 379) 00:17:04.554 1.969 - 1.982: 19.2616% ( 863) 00:17:04.554 1.982 - 1.994: 35.2815% ( 2382) 00:17:04.554 1.994 - 2.006: 51.7251% ( 2445) 00:17:04.554 2.006 - 2.018: 66.7429% ( 2233) 00:17:04.554 2.018 - 2.031: 77.8129% ( 1646) 00:17:04.554 2.031 - 2.043: 84.4240% ( 983) 00:17:04.554 2.043 - 2.055: 88.2036% ( 562) 00:17:04.554 2.055 - 2.068: 90.3289% ( 316) 00:17:04.554 2.068 - 2.080: 91.7009% ( 204) 00:17:04.554 2.080 - 2.092: 93.3284% ( 242) 00:17:04.554 2.092 - 2.105: 94.9089% ( 235) 00:17:04.554 2.105 - 2.117: 96.1934% ( 191) 00:17:04.554 2.117 - 2.129: 97.1484% ( 142) 00:17:04.554 2.129 - 2.142: 97.7134% ( 84) 00:17:04.554 2.142 - 2.154: 98.1303% ( 62) 00:17:04.554 2.154 - 2.166: 98.3456% ( 32) 00:17:04.554 2.166 - 2.178: 98.4868% ( 21) 00:17:04.554 2.178 - 2.191: 98.5608% ( 11) 00:17:04.554 2.191 - 2.203: 98.6078% ( 7) 00:17:04.554 2.203 - 2.215: 98.6751% ( 10) 00:17:04.554 2.215 - 2.228: 98.7289% ( 8) 00:17:04.554 2.228 - 2.240: 98.7423% ( 2) 00:17:04.554 2.240 - 2.252: 98.7558% ( 2) 00:17:04.554 2.252 - 2.265: 98.7827% ( 4) 00:17:04.554 2.277 - 2.289: 98.7962% ( 2) 00:17:04.554 2.289 - 2.302: 98.8163% ( 3) 00:17:04.554 2.302 - 2.314: 98.8231% ( 1) 00:17:04.554 2.314 - 2.326: 98.8365% ( 2) 00:17:04.554 2.326 - 2.338: 98.8432% ( 1) 00:17:04.554 2.338 - 2.351: 98.8500% ( 1) 00:17:04.554 2.351 - 2.363: 98.8567% ( 1) 00:17:04.554 2.375 - 2.388: 98.8634% ( 1) 00:17:04.554 2.412 - 2.425: 98.8769% ( 2) 00:17:04.554 2.425 - 2.437: 98.8836% ( 1) 
00:17:04.554 2.449 - 2.462: 98.8970% ( 2) 00:17:04.554 2.474 - 2.486: 98.9172% ( 3) 00:17:04.554 2.486 - 2.498: 98.9307% ( 2) 00:17:04.554 2.511 - 2.523: 98.9374% ( 1) 00:17:04.554 2.523 - 2.535: 98.9441% ( 1) 00:17:04.554 2.548 - 2.560: 98.9576% ( 2) 00:17:04.554 2.560 - 2.572: 98.9643% ( 1) 00:17:04.554 2.572 - 2.585: 98.9777% ( 2) 00:17:04.554 2.622 - 2.634: 98.9912% ( 2) 00:17:04.554 2.658 - 2.671: 99.0114% ( 3) 00:17:04.554 2.671 - 2.683: 99.0181% ( 1) 00:17:04.554 2.683 - 2.695: 99.0383% ( 3) 00:17:04.554 2.708 - 2.720: 99.0450% ( 1) 00:17:04.554 2.720 - 2.732: 99.0517% ( 1) 00:17:04.554 2.769 - 2.782: 99.0652% ( 2) 00:17:04.554 2.782 - 2.794: 99.0719% ( 1) 00:17:04.554 2.868 - 2.880: 99.0786% ( 1) 00:17:04.554 2.905 - 2.917: 99.0853% ( 1) 00:17:04.554 2.929 - 2.942: 99.0921% ( 1) 00:17:04.554 2.954 - 2.966: 99.1122% ( 3) 00:17:04.554 3.077 - 3.089: 99.1190% ( 1) 00:17:04.554 3.126 - 3.138: 99.1257% ( 1) 00:17:04.554 3.151 - 3.175: 99.1324% ( 1) 00:17:04.554 5.834 - 5.858: 99.1391% ( 1) 00:17:04.554 6.228 - 6.252: 99.1459% ( 1) 00:17:04.554 6.745 - 6.794: 99.1593% ( 2) 00:17:04.554 7.089 - 7.138: 99.1661% ( 1) 00:17:04.554 7.335 - 7.385: 99.1728% ( 1) 00:17:04.554 7.385 - 7.434: 99.1795% ( 1) 00:17:04.554 7.877 - 7.926: 99.1862% ( 1) 00:17:04.554 8.074 - 8.123: 99.1930% ( 1) 00:17:04.554 [2024-11-06 14:21:32.040363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:04.554 8.172 - 8.222: 99.1997% ( 1) 00:17:04.554 8.418 - 8.468: 99.2131% ( 2) 00:17:04.554 8.517 - 8.566: 99.2199% ( 1) 00:17:04.554 8.911 - 8.960: 99.2266% ( 1) 00:17:04.554 8.960 - 9.009: 99.2333% ( 1) 00:17:04.554 9.058 - 9.108: 99.2400% ( 1) 00:17:04.554 9.403 - 9.452: 99.2468% ( 1) 00:17:04.554 9.502 - 9.551: 99.2535% ( 1) 00:17:04.554 9.797 - 9.846: 99.2602% ( 1) 00:17:04.554 10.043 - 10.092: 99.2669% ( 1) 00:17:04.554 10.092 - 10.142: 99.2804% ( 2) 00:17:04.554 10.240 - 10.289: 99.2871% ( 1) 00:17:04.554 10.289 - 10.338: 99.2938% ( 1) 00:17:04.554 11.028 - 11.077: 99.3006% ( 1) 00:17:04.554 11.372 - 11.422: 99.3073% ( 1) 00:17:04.554 11.668 - 11.717: 99.3140% ( 1) 00:17:04.554 12.012 - 12.062: 99.3207% ( 1) 00:17:04.554 12.111 - 12.160: 99.3275% ( 1) 00:17:04.554 12.160 - 12.209: 99.3342% ( 1) 00:17:04.554 12.554 - 12.603: 99.3409% ( 1) 00:17:04.554 12.898 - 12.997: 99.3476% ( 1) 00:17:04.554 12.997 - 13.095: 99.3544% ( 1) 00:17:04.554 13.095 - 13.194: 99.3813% ( 4) 00:17:04.554 13.194 - 13.292: 99.3947% ( 2) 00:17:04.554 13.292 - 13.391: 99.4082% ( 2) 00:17:04.554 13.489 - 13.588: 99.4149% ( 1) 00:17:04.554 13.588 - 13.686: 99.4216% ( 1) 00:17:04.554 13.686 - 13.785: 99.4351% ( 2) 00:17:04.554 13.982 - 14.080: 99.4485% ( 2) 00:17:04.554 14.080 - 14.178: 99.4620% ( 2) 00:17:04.554 14.277 - 14.375: 99.4687% ( 1) 00:17:04.554 14.375 - 14.474: 99.4754% ( 1) 00:17:04.554 14.474 - 14.572: 99.4821% ( 1) 00:17:04.554 14.671 - 14.769: 99.4889% ( 1) 00:17:04.554 14.769 - 14.868: 99.4956% ( 1) 00:17:04.554 14.868 - 14.966: 99.5023% ( 1) 00:17:04.554 15.065 - 15.163: 99.5090% ( 1) 00:17:04.554 4007.778 - 4032.985: 100.0000% ( 73) 00:17:04.554 00:17:04.554 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:04.554 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:04.554 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # 
local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:04.554 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:04.554 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:04.811 [ 00:17:04.811 { 00:17:04.811 "allow_any_host": true, 00:17:04.812 "hosts": [], 00:17:04.812 "listen_addresses": [], 00:17:04.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:04.812 "subtype": "Discovery" 00:17:04.812 }, 00:17:04.812 { 00:17:04.812 "allow_any_host": true, 00:17:04.812 "hosts": [], 00:17:04.812 "listen_addresses": [ 00:17:04.812 { 00:17:04.812 "adrfam": "IPv4", 00:17:04.812 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:04.812 "trsvcid": "0", 00:17:04.812 "trtype": "VFIOUSER" 00:17:04.812 } 00:17:04.812 ], 00:17:04.812 "max_cntlid": 65519, 00:17:04.812 "max_namespaces": 32, 00:17:04.812 "min_cntlid": 1, 00:17:04.812 "model_number": "SPDK bdev Controller", 00:17:04.812 "namespaces": [ 00:17:04.812 { 00:17:04.812 "bdev_name": "Malloc1", 00:17:04.812 "name": "Malloc1", 00:17:04.812 "nguid": "F77A7E7EE7FB4770A8500ED8555A12D4", 00:17:04.812 "nsid": 1, 00:17:04.812 "uuid": "f77a7e7e-e7fb-4770-a850-0ed8555a12d4" 00:17:04.812 }, 00:17:04.812 { 00:17:04.812 "bdev_name": "Malloc3", 00:17:04.812 "name": "Malloc3", 00:17:04.812 "nguid": "5483B249A08A4B47BF0D8D3C4FE4AF2C", 00:17:04.812 "nsid": 2, 00:17:04.812 "uuid": "5483b249-a08a-4b47-bf0d-8d3c4fe4af2c" 00:17:04.812 } 00:17:04.812 ], 00:17:04.812 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:04.812 "serial_number": "SPDK1", 00:17:04.812 "subtype": "NVMe" 00:17:04.812 }, 00:17:04.812 { 00:17:04.812 "allow_any_host": true, 00:17:04.812 "hosts": [], 00:17:04.812 "listen_addresses": [ 00:17:04.812 { 00:17:04.812 "adrfam": "IPv4", 00:17:04.812 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:04.812 "trsvcid": "0", 00:17:04.812 "trtype": "VFIOUSER" 00:17:04.812 } 00:17:04.812 ], 00:17:04.812 "max_cntlid": 65519, 00:17:04.812 "max_namespaces": 32, 00:17:04.812 "min_cntlid": 1, 00:17:04.812 "model_number": "SPDK bdev Controller", 00:17:04.812 "namespaces": [ 00:17:04.812 { 00:17:04.812 "bdev_name": "Malloc2", 00:17:04.812 "name": "Malloc2", 00:17:04.812 "nguid": "285976939BA447DBA4FF22D423E12392", 00:17:04.812 "nsid": 1, 00:17:04.812 "uuid": "28597693-9ba4-47db-a4ff-22d423e12392" 00:17:04.812 } 00:17:04.812 ], 00:17:04.812 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:04.812 "serial_number": "SPDK2", 00:17:04.812 "subtype": "NVMe" 00:17:04.812 } 00:17:04.812 ] 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=81523 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=1 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=2 00:17:04.812 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=3 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:17:05.069 [2024-11-06 14:21:32.576704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:05.069 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:05.327 Malloc4 00:17:05.327 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:05.585 [2024-11-06 14:21:33.141059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:05.585 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:05.585 Asynchronous Event Request test 00:17:05.585 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:05.585 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:05.585 Registering asynchronous event callbacks... 00:17:05.585 Starting namespace attribute notice tests for all controllers... 00:17:05.585 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:05.585 aer_cb - Changed Namespace 00:17:05.585 Cleaning up... 
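
The AEN exercise above is driven entirely from the shell: the aer helper connects, arms an Asynchronous Event Request, and creates the touch file once it is ready; the test then hot-adds a second namespace over RPC, which produces the "aer_cb for log page 4" notice seen above and the Malloc4 entry in the nvmf_get_subsystems dump that follows. A minimal reproduction assembled from the commands traced in this log (repo-relative paths assumed):

    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # Arm an AER and create the touch file when ready (-n 2 -g -t exactly as traced above):
    test/nvme/aer/aer -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
    # Once /tmp/aer_touch_file exists, hot-add the namespace that fires the AEN:
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
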
00:17:05.843 [ 00:17:05.843 { 00:17:05.843 "allow_any_host": true, 00:17:05.843 "hosts": [], 00:17:05.843 "listen_addresses": [], 00:17:05.843 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:05.843 "subtype": "Discovery" 00:17:05.843 }, 00:17:05.843 { 00:17:05.843 "allow_any_host": true, 00:17:05.843 "hosts": [], 00:17:05.843 "listen_addresses": [ 00:17:05.843 { 00:17:05.843 "adrfam": "IPv4", 00:17:05.843 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:05.843 "trsvcid": "0", 00:17:05.843 "trtype": "VFIOUSER" 00:17:05.843 } 00:17:05.843 ], 00:17:05.843 "max_cntlid": 65519, 00:17:05.843 "max_namespaces": 32, 00:17:05.843 "min_cntlid": 1, 00:17:05.843 "model_number": "SPDK bdev Controller", 00:17:05.843 "namespaces": [ 00:17:05.843 { 00:17:05.843 "bdev_name": "Malloc1", 00:17:05.843 "name": "Malloc1", 00:17:05.843 "nguid": "F77A7E7EE7FB4770A8500ED8555A12D4", 00:17:05.843 "nsid": 1, 00:17:05.843 "uuid": "f77a7e7e-e7fb-4770-a850-0ed8555a12d4" 00:17:05.843 }, 00:17:05.843 { 00:17:05.843 "bdev_name": "Malloc3", 00:17:05.843 "name": "Malloc3", 00:17:05.843 "nguid": "5483B249A08A4B47BF0D8D3C4FE4AF2C", 00:17:05.843 "nsid": 2, 00:17:05.843 "uuid": "5483b249-a08a-4b47-bf0d-8d3c4fe4af2c" 00:17:05.843 } 00:17:05.843 ], 00:17:05.843 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:05.843 "serial_number": "SPDK1", 00:17:05.843 "subtype": "NVMe" 00:17:05.843 }, 00:17:05.843 { 00:17:05.843 "allow_any_host": true, 00:17:05.843 "hosts": [], 00:17:05.843 "listen_addresses": [ 00:17:05.843 { 00:17:05.843 "adrfam": "IPv4", 00:17:05.843 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:05.843 "trsvcid": "0", 00:17:05.843 "trtype": "VFIOUSER" 00:17:05.843 } 00:17:05.843 ], 00:17:05.843 "max_cntlid": 65519, 00:17:05.843 "max_namespaces": 32, 00:17:05.843 "min_cntlid": 1, 00:17:05.843 "model_number": "SPDK bdev Controller", 00:17:05.843 "namespaces": [ 00:17:05.843 { 00:17:05.843 "bdev_name": "Malloc2", 00:17:05.843 "name": "Malloc2", 00:17:05.843 "nguid": "285976939BA447DBA4FF22D423E12392", 00:17:05.843 "nsid": 1, 00:17:05.843 "uuid": "28597693-9ba4-47db-a4ff-22d423e12392" 00:17:05.843 }, 00:17:05.843 { 00:17:05.843 "bdev_name": "Malloc4", 00:17:05.843 "name": "Malloc4", 00:17:05.843 "nguid": "B6723B7532FA4709848C6EB26472DB75", 00:17:05.843 "nsid": 2, 00:17:05.843 "uuid": "b6723b75-32fa-4709-848c-6eb26472db75" 00:17:05.843 } 00:17:05.843 ], 00:17:05.843 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:05.843 "serial_number": "SPDK2", 00:17:05.843 "subtype": "NVMe" 00:17:05.843 } 00:17:05.843 ] 00:17:05.843 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 81523 00:17:05.843 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:05.843 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 80813 00:17:05.843 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 80813 ']' 00:17:05.843 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 80813 00:17:05.843 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:17:05.843 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.844 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80813 00:17:05.844 killing process with pid 80813 00:17:05.844 14:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:05.844 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:05.844 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80813' 00:17:05.844 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 80813 00:17:05.844 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 80813 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=81578 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 81578' 00:17:06.778 Process pid: 81578 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 81578 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 81578 ']' 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:06.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:06.778 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:07.036 [2024-11-06 14:21:34.441154] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:07.036 [2024-11-06 14:21:34.442848] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:17:07.036 [2024-11-06 14:21:34.442921] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.036 [2024-11-06 14:21:34.616356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.294 [2024-11-06 14:21:34.717629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.294 [2024-11-06 14:21:34.717686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.294 [2024-11-06 14:21:34.717695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.294 [2024-11-06 14:21:34.717703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.294 [2024-11-06 14:21:34.717709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.294 [2024-11-06 14:21:34.719369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.294 [2024-11-06 14:21:34.719590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.294 [2024-11-06 14:21:34.719794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.294 [2024-11-06 14:21:34.719815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.294 [2024-11-06 14:21:34.917357] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:07.294 [2024-11-06 14:21:34.917632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:07.294 [2024-11-06 14:21:34.917694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:07.294 [2024-11-06 14:21:34.918192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:07.294 [2024-11-06 14:21:34.918473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
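
With the target back up in interrupt mode, the traces that follow rebuild the two vfio-user controllers from scratch. Condensed into one sequence (RPCs exactly as traced below for device 1; device 2 repeats them with Malloc2, cnode2, SPDK2 and vfio-user2; repo-relative paths assumed):

    # Target on cores 0-3 in interrupt mode, as launched above:
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # VFIOUSER transport; -M -I are the extra transport_args this pass uses:
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # Per device: malloc bdev -> subsystem -> namespace -> vfio-user listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
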
00:17:07.858 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:07.858 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:17:07.858 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:08.790 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:09.047 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:09.047 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:09.047 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:09.047 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:09.047 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:09.304 Malloc1 00:17:09.304 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:09.561 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:09.561 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:09.818 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:09.818 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:09.818 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:10.076 Malloc2 00:17:10.076 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:10.333 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:10.590 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 81578 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 81578 ']' 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 81578 00:17:10.847 14:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81578 00:17:10.847 killing process with pid 81578 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81578' 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 81578 00:17:10.847 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 81578 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:11.809 00:17:11.809 real 0m56.050s 00:17:11.809 user 3m33.041s 00:17:11.809 sys 0m3.167s 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:11.809 ************************************ 00:17:11.809 END TEST nvmf_vfio_user 00:17:11.809 ************************************ 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.809 ************************************ 00:17:11.809 START TEST nvmf_vfio_user_nvme_compliance 00:17:11.809 ************************************ 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:11.809 * Looking for test storage... 
00:17:11.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.809 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:11.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.810 --rc genhtml_branch_coverage=1 00:17:11.810 --rc genhtml_function_coverage=1 00:17:11.810 --rc genhtml_legend=1 00:17:11.810 --rc geninfo_all_blocks=1 00:17:11.810 --rc geninfo_unexecuted_blocks=1 00:17:11.810 00:17:11.810 ' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:11.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.810 --rc genhtml_branch_coverage=1 00:17:11.810 --rc genhtml_function_coverage=1 00:17:11.810 --rc genhtml_legend=1 00:17:11.810 --rc geninfo_all_blocks=1 00:17:11.810 --rc geninfo_unexecuted_blocks=1 00:17:11.810 00:17:11.810 ' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:11.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.810 --rc genhtml_branch_coverage=1 00:17:11.810 --rc genhtml_function_coverage=1 00:17:11.810 --rc genhtml_legend=1 00:17:11.810 --rc geninfo_all_blocks=1 00:17:11.810 --rc geninfo_unexecuted_blocks=1 00:17:11.810 00:17:11.810 ' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:11.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.810 --rc genhtml_branch_coverage=1 00:17:11.810 --rc genhtml_function_coverage=1 00:17:11.810 --rc genhtml_legend=1 00:17:11.810 --rc geninfo_all_blocks=1 00:17:11.810 --rc 
geninfo_unexecuted_blocks=1 00:17:11.810 00:17:11.810 ' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.810 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=81779 00:17:11.810 Process pid: 81779 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 81779' 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 81779 00:17:11.810 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 81779 ']' 00:17:11.811 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.811 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.811 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.811 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:11.811 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.811 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:11.811 [2024-11-06 14:21:39.380728] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:17:11.811 [2024-11-06 14:21:39.380829] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.069 [2024-11-06 14:21:39.536305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.069 [2024-11-06 14:21:39.653431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.069 [2024-11-06 14:21:39.653492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.069 [2024-11-06 14:21:39.653505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.069 [2024-11-06 14:21:39.653513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.069 [2024-11-06 14:21:39.653524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.069 [2024-11-06 14:21:39.655191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.069 [2024-11-06 14:21:39.655348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.069 [2024-11-06 14:21:39.655370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.634 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.634 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:17:12.634 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:13.575 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:13.575 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:13.575 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:13.575 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.575 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:13.832 malloc0 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:13.832 14:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.832 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:14.090 00:17:14.090 00:17:14.090 CUnit - A unit testing framework for C - Version 2.1-3 00:17:14.090 http://cunit.sourceforge.net/ 00:17:14.090 00:17:14.090 00:17:14.090 Suite: nvme_compliance 00:17:14.090 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 14:21:41.554602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.090 [2024-11-06 14:21:41.556104] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:14.090 [2024-11-06 14:21:41.556134] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:14.090 [2024-11-06 14:21:41.556152] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:14.090 [2024-11-06 14:21:41.559627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.090 passed 00:17:14.090 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 14:21:41.656396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.090 [2024-11-06 14:21:41.659420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.090 passed 00:17:14.349 Test: admin_identify_ns ...[2024-11-06 14:21:41.753484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.349 [2024-11-06 14:21:41.813987] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:14.349 [2024-11-06 14:21:41.821972] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:14.349 [2024-11-06 14:21:41.843095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:17:14.349 passed 00:17:14.349 Test: admin_get_features_mandatory_features ...[2024-11-06 14:21:41.935574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.349 [2024-11-06 14:21:41.938592] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.607 passed 00:17:14.607 Test: admin_get_features_optional_features ...[2024-11-06 14:21:42.035363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.607 [2024-11-06 14:21:42.038398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.607 passed 00:17:14.607 Test: admin_set_features_number_of_queues ...[2024-11-06 14:21:42.131461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.607 [2024-11-06 14:21:42.240513] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.864 passed 00:17:14.864 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 14:21:42.335977] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.864 [2024-11-06 14:21:42.339002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.864 passed 00:17:14.864 Test: admin_get_log_page_with_lpo ...[2024-11-06 14:21:42.435094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.122 [2024-11-06 14:21:42.505004] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:15.122 [2024-11-06 14:21:42.518055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.122 passed 00:17:15.122 Test: fabric_property_get ...[2024-11-06 14:21:42.609477] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.122 [2024-11-06 14:21:42.610810] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:15.122 [2024-11-06 14:21:42.612501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.122 passed 00:17:15.122 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 14:21:42.709231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.122 [2024-11-06 14:21:42.710614] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:15.122 [2024-11-06 14:21:42.712259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.380 passed 00:17:15.380 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 14:21:42.805486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.380 [2024-11-06 14:21:42.889964] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:15.380 [2024-11-06 14:21:42.905971] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:15.380 [2024-11-06 14:21:42.911523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.380 passed 00:17:15.380 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 14:21:43.009897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.380 [2024-11-06 14:21:43.011290] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:15.380 [2024-11-06 14:21:43.012928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:15.639 passed 00:17:15.639 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 14:21:43.107263] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.639 [2024-11-06 14:21:43.184978] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:15.639 [2024-11-06 14:21:43.208971] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:15.639 [2024-11-06 14:21:43.214520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.639 passed 00:17:15.898 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 14:21:43.308960] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.898 [2024-11-06 14:21:43.310401] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:15.898 [2024-11-06 14:21:43.310452] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:15.898 [2024-11-06 14:21:43.311986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.898 passed 00:17:15.898 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 14:21:43.406392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.898 [2024-11-06 14:21:43.499970] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:15.898 [2024-11-06 14:21:43.507969] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:15.898 [2024-11-06 14:21:43.515965] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:15.898 [2024-11-06 14:21:43.523969] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:16.156 [2024-11-06 14:21:43.553559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.156 passed 00:17:16.156 Test: admin_create_io_sq_verify_pc ...[2024-11-06 14:21:43.646165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:16.156 [2024-11-06 14:21:43.663994] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:16.156 [2024-11-06 14:21:43.682136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.156 passed 00:17:16.156 Test: admin_create_io_qp_max_qps ...[2024-11-06 14:21:43.778019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:17.531 [2024-11-06 14:21:44.879970] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:17.788 [2024-11-06 14:21:45.305646] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:17.788 passed 00:17:17.788 Test: admin_create_io_sq_shared_cq ...[2024-11-06 14:21:45.399099] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.046 [2024-11-06 14:21:45.531964] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:18.046 [2024-11-06 14:21:45.569081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.046 passed 00:17:18.046 00:17:18.046 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.046 suites 1 1 n/a 0 0 00:17:18.046 tests 18 18 18 0 0 00:17:18.046 asserts 360 360 
360 0 n/a 00:17:18.046 00:17:18.046 Elapsed time = 1.690 seconds 00:17:18.046 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 81779 00:17:18.046 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 81779 ']' 00:17:18.046 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 81779 00:17:18.046 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:17:18.046 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:18.046 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81779 00:17:18.304 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:18.304 killing process with pid 81779 00:17:18.304 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:18.304 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81779' 00:17:18.304 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 81779 00:17:18.304 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 81779 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:18.871 00:17:18.871 real 0m7.224s 00:17:18.871 user 0m19.975s 00:17:18.871 sys 0m0.585s 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:18.871 ************************************ 00:17:18.871 END TEST nvmf_vfio_user_nvme_compliance 00:17:18.871 ************************************ 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.871 ************************************ 00:17:18.871 START TEST nvmf_vfio_user_fuzz 00:17:18.871 ************************************ 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:18.871 * Looking for test storage... 
00:17:18.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.871 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.130 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:19.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.131 --rc genhtml_branch_coverage=1 00:17:19.131 --rc genhtml_function_coverage=1 00:17:19.131 --rc genhtml_legend=1 00:17:19.131 --rc geninfo_all_blocks=1 00:17:19.131 --rc geninfo_unexecuted_blocks=1 00:17:19.131 00:17:19.131 ' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:19.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.131 --rc genhtml_branch_coverage=1 00:17:19.131 --rc genhtml_function_coverage=1 00:17:19.131 --rc genhtml_legend=1 00:17:19.131 --rc geninfo_all_blocks=1 00:17:19.131 --rc geninfo_unexecuted_blocks=1 00:17:19.131 00:17:19.131 ' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:19.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.131 --rc genhtml_branch_coverage=1 00:17:19.131 --rc genhtml_function_coverage=1 00:17:19.131 --rc genhtml_legend=1 00:17:19.131 --rc geninfo_all_blocks=1 00:17:19.131 --rc geninfo_unexecuted_blocks=1 00:17:19.131 00:17:19.131 ' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:19.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.131 --rc genhtml_branch_coverage=1 00:17:19.131 --rc genhtml_function_coverage=1 00:17:19.131 --rc genhtml_legend=1 00:17:19.131 --rc geninfo_all_blocks=1 00:17:19.131 --rc geninfo_unexecuted_blocks=1 00:17:19.131 00:17:19.131 ' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
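The version-comparison block traced just above (and repeated at the top of every suite in this log) is autotest_common.sh deciding whether the installed lcov predates 2.x so it can export the matching LCOV_OPTS. Reduced to the '<' case actually exercised here, the scripts/common.sh helper behaves roughly like this sketch (the real cmp_versions also handles the other comparison operators and validates components via decimal):

    # sketch: lt 1.15 2 -> true, so the lcov-1.x branch/function coverage flags get exported
    cmp_versions_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing component decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                              # equal versions are not '<'
    }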
00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.131 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:19.131 14:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=81950 00:17:19.131 Process pid: 81950 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 81950' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 81950 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 81950 ']' 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:19.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
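The fuzz stage below follows the now-familiar bring-up, then lets nvme_fuzz hammer the controller with randomized commands for a fixed time budget. Condensed from the vfio_user_fuzz.sh trace that follows (core masks, timeout and seed exactly as logged; -N and -a are passed through as traced):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                # target pinned to core 0
    # VFIOUSER transport + malloc0 namespace on nqn.2021-09.io.spdk:cnode0,
    # listener at /var/run/vfio-user, then:
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    # -m 0x2: fuzzer on core 1; -t 30: 30 s run; -S 123456: fixed seed for reproducibility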
00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:19.131 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:19.132 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:20.064 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.064 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:17:20.064 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 malloc0 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.999 
14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:20.999 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:21.564 Shutting down the fuzz application 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 81950 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 81950 ']' 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 81950 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81950 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:21.564 killing process with pid 81950 00:17:21.564 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81950' 00:17:21.565 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 81950 00:17:21.565 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 81950 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:22.131 00:17:22.131 real 0m3.309s 00:17:22.131 user 0m3.599s 00:17:22.131 sys 0m0.408s 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:22.131 ************************************ 00:17:22.131 END TEST nvmf_vfio_user_fuzz 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:22.131 ************************************ 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.131 ************************************ 00:17:22.131 START TEST nvmf_auth_target 00:17:22.131 ************************************ 00:17:22.131 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:22.390 * Looking for test storage... 00:17:22.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:22.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.390 --rc genhtml_branch_coverage=1 00:17:22.390 --rc genhtml_function_coverage=1 00:17:22.390 --rc genhtml_legend=1 00:17:22.390 --rc geninfo_all_blocks=1 00:17:22.390 --rc geninfo_unexecuted_blocks=1 00:17:22.390 00:17:22.390 ' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:22.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.390 --rc genhtml_branch_coverage=1 00:17:22.390 --rc genhtml_function_coverage=1 00:17:22.390 --rc genhtml_legend=1 00:17:22.390 --rc geninfo_all_blocks=1 00:17:22.390 --rc geninfo_unexecuted_blocks=1 00:17:22.390 00:17:22.390 ' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:22.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.390 --rc genhtml_branch_coverage=1 00:17:22.390 --rc genhtml_function_coverage=1 00:17:22.390 --rc genhtml_legend=1 00:17:22.390 --rc geninfo_all_blocks=1 00:17:22.390 --rc geninfo_unexecuted_blocks=1 00:17:22.390 00:17:22.390 ' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:22.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.390 --rc genhtml_branch_coverage=1 00:17:22.390 --rc genhtml_function_coverage=1 00:17:22.390 --rc genhtml_legend=1 00:17:22.390 --rc geninfo_all_blocks=1 00:17:22.390 --rc geninfo_unexecuted_blocks=1 00:17:22.390 00:17:22.390 ' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.390 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.391 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.391 
14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:22.391 Cannot find device "nvmf_init_br" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:22.391 Cannot find device "nvmf_init_br2" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:22.391 Cannot find device "nvmf_tgt_br" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.391 Cannot find device "nvmf_tgt_br2" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:22.391 Cannot find device "nvmf_init_br" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:22.391 Cannot find device "nvmf_init_br2" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:22.391 Cannot find device "nvmf_tgt_br" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:22.391 Cannot find device "nvmf_tgt_br2" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:22.391 Cannot find device "nvmf_br" 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:17:22.391 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:22.391 Cannot find device "nvmf_init_if" 00:17:22.391 14:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:22.391 Cannot find device "nvmf_init_if2" 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.391 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.650 14:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:22.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:17:22.650 00:17:22.650 --- 10.0.0.3 ping statistics --- 00:17:22.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.650 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:22.650 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:22.650 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:17:22.650 00:17:22.650 --- 10.0.0.4 ping statistics --- 00:17:22.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.650 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:22.650 00:17:22.650 --- 10.0.0.1 ping statistics --- 00:17:22.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.650 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:22.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:17:22.650 00:17:22.650 --- 10.0.0.2 ping statistics --- 00:17:22.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.650 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:22.650 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=82215 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 82215 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 82215 ']' 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
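At this point nvmf_veth_init has rebuilt the test network from scratch (the "Cannot find device" lines above are just the idempotent teardown of interfaces that did not exist yet). The topology is four veth pairs whose bridge-side ends all join one bridge, with the target-side interfaces moved into a private network namespace. A condensed sketch, with the interface names and 10.0.0.0/24 addresses exactly as in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" master nvmf_br                           # everything hangs off one bridge
done

The iptables rules are inserted with -m comment --comment 'SPDK_NVMF:...', presumably so cleanup can later match and delete exactly the rules the test added. The four pings then prove host-to-namespace reachability in both directions, and nvmf_tgt itself runs inside the namespace: NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD array, i.e. ip netns exec nvmf_tgt_ns_spdk, as the nvmfappstart line above shows.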
00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.651 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=82259 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d07a3e5f5121455f5245b6373ac2b898877602449bce50bb 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ben 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d07a3e5f5121455f5245b6373ac2b898877602449bce50bb 0 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d07a3e5f5121455f5245b6373ac2b898877602449bce50bb 0 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d07a3e5f5121455f5245b6373ac2b898877602449bce50bb 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:23.586 14:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ben 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ben 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.ben 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e00dab9149c187da9f12160a8d3ad6db12176248349688e129cae515d69f4487 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ggZ 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e00dab9149c187da9f12160a8d3ad6db12176248349688e129cae515d69f4487 3 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e00dab9149c187da9f12160a8d3ad6db12176248349688e129cae515d69f4487 3 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e00dab9149c187da9f12160a8d3ad6db12176248349688e129cae515d69f4487 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ggZ 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ggZ 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ggZ 00:17:23.586 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:23.587 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.587 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.587 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:23.587 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:23.587 14:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:23.587 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:23.587 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e82d91878e79d592692eb1b2fff4ba67 00:17:23.587 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Bc8 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e82d91878e79d592692eb1b2fff4ba67 1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e82d91878e79d592692eb1b2fff4ba67 1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e82d91878e79d592692eb1b2fff4ba67 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Bc8 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Bc8 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Bc8 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=81f7989fc9668fad512146f510f977f54ddf61bbc5ac2040 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eRo 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 81f7989fc9668fad512146f510f977f54ddf61bbc5ac2040 2 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 81f7989fc9668fad512146f510f977f54ddf61bbc5ac2040 2 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=81f7989fc9668fad512146f510f977f54ddf61bbc5ac2040 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eRo 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eRo 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.eRo 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=81b7b95cfb80d3fc9bc2da587fa7b731978eb51221e406f9 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.KdJ 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 81b7b95cfb80d3fc9bc2da587fa7b731978eb51221e406f9 2 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 81b7b95cfb80d3fc9bc2da587fa7b731978eb51221e406f9 2 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=81b7b95cfb80d3fc9bc2da587fa7b731978eb51221e406f9 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.KdJ 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.KdJ 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.KdJ 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.846 14:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ffa27095073a1b50914db2d7545e3aa1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2DT 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ffa27095073a1b50914db2d7545e3aa1 1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ffa27095073a1b50914db2d7545e3aa1 1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ffa27095073a1b50914db2d7545e3aa1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2DT 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2DT 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.2DT 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fc7ab508e0f6f60411b724b3263691fb1269ba74ebc60f4265d4fd6075985f91 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GhX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
fc7ab508e0f6f60411b724b3263691fb1269ba74ebc60f4265d4fd6075985f91 3 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fc7ab508e0f6f60411b724b3263691fb1269ba74ebc60f4265d4fd6075985f91 3 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fc7ab508e0f6f60411b724b3263691fb1269ba74ebc60f4265d4fd6075985f91 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GhX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GhX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.GhX 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 82215 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 82215 ']' 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:23.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.846 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.847 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:23.847 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 82259 /var/tmp/host.sock 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 82259 ']' 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:24.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
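Each gen_dhchap_key call above reduces to: read len/2 random bytes with xxd, keep their hex spelling as the secret, and wrap it in the DHHC-1 transport format. Judging from this trace and from the DHHC-1 strings that appear at the connect step later in the log, the wrapping is base64(secret followed by a little-endian CRC-32 of the secret) behind a two-hex-digit digest index (null=00, sha256=01, sha384=02, sha512=03, matching the digests map above). The sketch below is a re-implementation consistent with that observation, not SPDK's canonical helper:

gen_dhchap_key() {    # sketch: gen_dhchap_key <digest> <len-in-hex-chars>
    local digest=$1 len=$2 key
    local -A idx=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 'len' hex characters of randomness
    python3 - "$key" "${idx[$digest]}" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian CRC-32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

gen_dhchap_key null 48 reproduces the shape of keys[0] above: "DHHC-1:00:" followed by 72 base64 characters (48 secret bytes plus 4 CRC bytes). The trace then writes each secret to a mktemp'd /tmp/spdk.key-* file and chmod 0600's it; those file paths, not the raw secrets, are what the keyring RPCs below consume.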
00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:24.109 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ben 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ben 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ben 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ggZ ]] 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ggZ 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ggZ 00:17:24.675 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ggZ 00:17:24.933 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:24.933 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Bc8 00:17:24.933 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.933 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.933 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.933 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Bc8 00:17:24.933 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Bc8 00:17:25.190 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.eRo ]] 00:17:25.190 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eRo 00:17:25.190 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.190 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.190 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.190 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eRo 00:17:25.190 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eRo 00:17:25.447 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:25.447 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KdJ 00:17:25.447 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.447 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.447 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.447 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KdJ 00:17:25.447 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KdJ 00:17:25.705 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.2DT ]] 00:17:25.705 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2DT 00:17:25.705 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.705 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.705 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.705 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2DT 00:17:25.705 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2DT 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GhX 00:17:25.962 14:21:53 
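The loop running here registers every generated key file twice, because DH-HMAC-CHAP is mutual: the target process (default RPC socket) needs keyN to challenge the host, and the host-side spdk_tgt listening on /var/tmp/host.sock needs the same named keys to answer and, where a ckeyN exists, to verify the controller in return. A sketch of the pattern, assuming keys/ckeys hold the /tmp/spdk.key-* paths generated above (ckeys[3] was left empty, so the guard matters):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[i]}"                        # target process
    $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"  # host process
    if [[ -n ${ckeys[i]} ]]; then        # no ctrlr key for key3 in this run
        $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done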
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.GhX 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GhX 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.962 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.235 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.492 00:17:26.492 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.492 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.492 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.751 { 00:17:26.751 "auth": { 00:17:26.751 "dhgroup": "null", 00:17:26.751 "digest": "sha256", 00:17:26.751 "state": "completed" 00:17:26.751 }, 00:17:26.751 "cntlid": 1, 00:17:26.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:26.751 "listen_address": { 00:17:26.751 "adrfam": "IPv4", 00:17:26.751 "traddr": "10.0.0.3", 00:17:26.751 "trsvcid": "4420", 00:17:26.751 "trtype": "TCP" 00:17:26.751 }, 00:17:26.751 "peer_address": { 00:17:26.751 "adrfam": "IPv4", 00:17:26.751 "traddr": "10.0.0.1", 00:17:26.751 "trsvcid": "49714", 00:17:26.751 "trtype": "TCP" 00:17:26.751 }, 00:17:26.751 "qid": 0, 00:17:26.751 "state": "enabled", 00:17:26.751 "thread": "nvmf_tgt_poll_group_000" 00:17:26.751 } 00:17:26.751 ]' 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.751 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.009 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:27.009 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.009 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.009 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.009 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.267 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:27.267 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.448 14:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.448 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.448 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.707 { 00:17:31.707 "auth": { 00:17:31.707 "dhgroup": "null", 00:17:31.707 "digest": "sha256", 00:17:31.707 "state": "completed" 00:17:31.707 }, 00:17:31.707 "cntlid": 3, 00:17:31.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:31.707 "listen_address": { 00:17:31.707 "adrfam": "IPv4", 00:17:31.707 "traddr": "10.0.0.3", 00:17:31.707 "trsvcid": "4420", 00:17:31.707 "trtype": "TCP" 00:17:31.707 }, 00:17:31.707 "peer_address": { 00:17:31.707 "adrfam": "IPv4", 00:17:31.707 "traddr": "10.0.0.1", 00:17:31.707 "trsvcid": "40910", 00:17:31.707 "trtype": "TCP" 00:17:31.707 }, 00:17:31.707 "qid": 0, 00:17:31.707 "state": "enabled", 00:17:31.707 "thread": "nvmf_tgt_poll_group_000" 00:17:31.707 } 00:17:31.707 ]' 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.707 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.966 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret 
DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:31.966 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.592 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.850 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.108 00:17:33.108 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.108 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.108 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.365 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.365 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.366 { 00:17:33.366 "auth": { 00:17:33.366 "dhgroup": "null", 00:17:33.366 "digest": "sha256", 00:17:33.366 "state": "completed" 00:17:33.366 }, 00:17:33.366 "cntlid": 5, 00:17:33.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:33.366 "listen_address": { 00:17:33.366 "adrfam": "IPv4", 00:17:33.366 "traddr": "10.0.0.3", 00:17:33.366 "trsvcid": "4420", 00:17:33.366 "trtype": "TCP" 00:17:33.366 }, 00:17:33.366 "peer_address": { 00:17:33.366 "adrfam": "IPv4", 00:17:33.366 "traddr": "10.0.0.1", 00:17:33.366 "trsvcid": "40934", 00:17:33.366 "trtype": "TCP" 00:17:33.366 }, 00:17:33.366 "qid": 0, 00:17:33.366 "state": "enabled", 00:17:33.366 "thread": "nvmf_tgt_poll_group_000" 00:17:33.366 } 00:17:33.366 ]' 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.366 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.624 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:33.624 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.190 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.447 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.705 00:17:34.705 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.705 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.705 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.962 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.962 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.962 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.963 { 00:17:34.963 "auth": { 00:17:34.963 "dhgroup": "null", 00:17:34.963 "digest": "sha256", 00:17:34.963 "state": "completed" 00:17:34.963 }, 00:17:34.963 "cntlid": 7, 00:17:34.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:34.963 "listen_address": { 00:17:34.963 "adrfam": "IPv4", 00:17:34.963 "traddr": "10.0.0.3", 00:17:34.963 "trsvcid": "4420", 00:17:34.963 "trtype": "TCP" 00:17:34.963 }, 00:17:34.963 "peer_address": { 00:17:34.963 "adrfam": "IPv4", 00:17:34.963 "traddr": "10.0.0.1", 00:17:34.963 "trsvcid": "40948", 00:17:34.963 "trtype": "TCP" 00:17:34.963 }, 00:17:34.963 "qid": 0, 00:17:34.963 "state": "enabled", 00:17:34.963 "thread": "nvmf_tgt_poll_group_000" 00:17:34.963 } 00:17:34.963 ]' 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.963 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.221 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:35.221 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.786 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.044 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.302 00:17:36.302 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.302 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.302 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.560 { 00:17:36.560 "auth": { 00:17:36.560 "dhgroup": "ffdhe2048", 00:17:36.560 "digest": "sha256", 00:17:36.560 "state": "completed" 00:17:36.560 }, 00:17:36.560 "cntlid": 9, 00:17:36.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:36.560 "listen_address": { 00:17:36.560 "adrfam": "IPv4", 00:17:36.560 "traddr": "10.0.0.3", 00:17:36.560 "trsvcid": "4420", 00:17:36.560 "trtype": "TCP" 00:17:36.560 }, 00:17:36.560 "peer_address": { 00:17:36.560 "adrfam": "IPv4", 00:17:36.560 "traddr": "10.0.0.1", 00:17:36.560 "trsvcid": "40972", 00:17:36.560 "trtype": "TCP" 00:17:36.560 }, 00:17:36.560 "qid": 0, 00:17:36.560 "state": "enabled", 00:17:36.560 "thread": "nvmf_tgt_poll_group_000" 00:17:36.560 } 00:17:36.560 ]' 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.560 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.817 
14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:36.817 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.382 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.640 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.943 00:17:37.943 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.943 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.943 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.201 { 00:17:38.201 "auth": { 00:17:38.201 "dhgroup": "ffdhe2048", 00:17:38.201 "digest": "sha256", 00:17:38.201 "state": "completed" 00:17:38.201 }, 00:17:38.201 "cntlid": 11, 00:17:38.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:38.201 "listen_address": { 00:17:38.201 "adrfam": "IPv4", 00:17:38.201 "traddr": "10.0.0.3", 00:17:38.201 "trsvcid": "4420", 00:17:38.201 "trtype": "TCP" 00:17:38.201 }, 00:17:38.201 "peer_address": { 00:17:38.201 "adrfam": "IPv4", 00:17:38.201 "traddr": "10.0.0.1", 00:17:38.201 "trsvcid": "54082", 00:17:38.201 "trtype": "TCP" 00:17:38.201 }, 00:17:38.201 "qid": 0, 00:17:38.201 "state": "enabled", 00:17:38.201 "thread": "nvmf_tgt_poll_group_000" 00:17:38.201 } 00:17:38.201 ]' 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.201 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.201 
14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.459 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:38.459 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.025 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.294 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.552 00:17:39.552 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.552 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.552 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.810 { 00:17:39.810 "auth": { 00:17:39.810 "dhgroup": "ffdhe2048", 00:17:39.810 "digest": "sha256", 00:17:39.810 "state": "completed" 00:17:39.810 }, 00:17:39.810 "cntlid": 13, 00:17:39.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:39.810 "listen_address": { 00:17:39.810 "adrfam": "IPv4", 00:17:39.810 "traddr": "10.0.0.3", 00:17:39.810 "trsvcid": "4420", 00:17:39.810 "trtype": "TCP" 00:17:39.810 }, 00:17:39.810 "peer_address": { 00:17:39.810 "adrfam": "IPv4", 00:17:39.810 "traddr": "10.0.0.1", 00:17:39.810 "trsvcid": "54118", 00:17:39.810 "trtype": "TCP" 00:17:39.810 }, 00:17:39.810 "qid": 0, 00:17:39.810 "state": "enabled", 00:17:39.810 "thread": "nvmf_tgt_poll_group_000" 00:17:39.810 } 00:17:39.810 ]' 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.810 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.069 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.069 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.069 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.069 14:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.069 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.069 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:40.069 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
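(annotation, condensed from the xtrace records above rather than captured output: each pass of the digest/dhgroup/key loop in target/auth.sh reduces to the RPC sequence sketched below, using the same host socket, NQNs, address, and key slots as this run; the DHHC-1 secrets are abbreviated here, rpc_cmd is the target-socket wrapper the trace itself uses, and key slot 3 carries no controller key, so its ckey arguments drop out on that pass)

  # host side: restrict the initiator to the digest/DH-group combination under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # target side: register the host NQN with its DH-HMAC-CHAP key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach a controller, which forces authentication with the same keys
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # verify: the controller came up and the qpair reports the negotiated parameters
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'        # expect: nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
                                                          # expect: sha256, ffdhe2048, completed

  # detach, then repeat the handshake through the kernel initiator with the
  # equivalent secrets in DHHC-1 interchange format (values abbreviated)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 \
      --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # teardown before the next combination
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435

(the trace resumes below with the kernel-initiator pass for key slot 2, continues through key slot 3, then advances to the ffdhe3072 group for the same sha256 digest)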
00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.260 00:17:41.260 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.260 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.260 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.518 { 00:17:41.518 "auth": { 00:17:41.518 "dhgroup": "ffdhe2048", 00:17:41.518 "digest": "sha256", 00:17:41.518 "state": "completed" 00:17:41.518 }, 00:17:41.518 "cntlid": 15, 00:17:41.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:41.518 "listen_address": { 00:17:41.518 "adrfam": "IPv4", 00:17:41.518 "traddr": "10.0.0.3", 00:17:41.518 "trsvcid": "4420", 00:17:41.518 "trtype": "TCP" 00:17:41.518 }, 00:17:41.518 "peer_address": { 00:17:41.518 "adrfam": "IPv4", 00:17:41.518 "traddr": "10.0.0.1", 00:17:41.518 "trsvcid": "54152", 00:17:41.518 "trtype": "TCP" 00:17:41.518 }, 00:17:41.518 "qid": 0, 00:17:41.518 "state": "enabled", 00:17:41.518 "thread": "nvmf_tgt_poll_group_000" 00:17:41.518 } 00:17:41.518 ]' 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.518 
14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.518 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.776 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:41.776 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:42.342 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.600 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.858 00:17:42.858 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.858 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.858 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.117 { 00:17:43.117 "auth": { 00:17:43.117 "dhgroup": "ffdhe3072", 00:17:43.117 "digest": "sha256", 00:17:43.117 "state": "completed" 00:17:43.117 }, 00:17:43.117 "cntlid": 17, 00:17:43.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:43.117 "listen_address": { 00:17:43.117 "adrfam": "IPv4", 00:17:43.117 "traddr": "10.0.0.3", 00:17:43.117 "trsvcid": "4420", 00:17:43.117 "trtype": "TCP" 00:17:43.117 }, 00:17:43.117 "peer_address": { 00:17:43.117 "adrfam": "IPv4", 00:17:43.117 "traddr": "10.0.0.1", 00:17:43.117 "trsvcid": "54188", 00:17:43.117 "trtype": "TCP" 00:17:43.117 }, 00:17:43.117 "qid": 0, 00:17:43.117 "state": "enabled", 00:17:43.117 "thread": "nvmf_tgt_poll_group_000" 00:17:43.117 } 00:17:43.117 ]' 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.117 14:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.117 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.375 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:43.375 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:43.942 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.200 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.201 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.201 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.201 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.459 00:17:44.459 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.459 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.459 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.717 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.717 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.717 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.717 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.717 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.717 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.717 { 00:17:44.717 "auth": { 00:17:44.717 "dhgroup": "ffdhe3072", 00:17:44.717 "digest": "sha256", 00:17:44.717 "state": "completed" 00:17:44.717 }, 00:17:44.717 "cntlid": 19, 00:17:44.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:44.717 "listen_address": { 00:17:44.717 "adrfam": "IPv4", 00:17:44.717 "traddr": "10.0.0.3", 00:17:44.717 "trsvcid": "4420", 00:17:44.717 "trtype": "TCP" 00:17:44.717 }, 00:17:44.718 "peer_address": { 00:17:44.718 "adrfam": "IPv4", 00:17:44.718 "traddr": "10.0.0.1", 00:17:44.718 "trsvcid": "54224", 00:17:44.718 "trtype": "TCP" 00:17:44.718 }, 00:17:44.718 "qid": 0, 00:17:44.718 "state": "enabled", 00:17:44.718 "thread": "nvmf_tgt_poll_group_000" 00:17:44.718 } 00:17:44.718 ]' 00:17:44.718 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.718 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.718 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.976 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.976 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.976 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.976 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.976 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.262 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:45.262 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.841 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.099 00:17:46.099 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.099 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.099 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.357 { 00:17:46.357 "auth": { 00:17:46.357 "dhgroup": "ffdhe3072", 00:17:46.357 "digest": "sha256", 00:17:46.357 "state": "completed" 00:17:46.357 }, 00:17:46.357 "cntlid": 21, 00:17:46.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:46.357 "listen_address": { 00:17:46.357 "adrfam": "IPv4", 00:17:46.357 "traddr": "10.0.0.3", 00:17:46.357 "trsvcid": "4420", 00:17:46.357 "trtype": "TCP" 00:17:46.357 }, 00:17:46.357 "peer_address": { 00:17:46.357 "adrfam": "IPv4", 00:17:46.357 "traddr": "10.0.0.1", 00:17:46.357 "trsvcid": "54262", 00:17:46.357 "trtype": "TCP" 00:17:46.357 }, 00:17:46.357 "qid": 0, 00:17:46.357 "state": "enabled", 00:17:46.357 "thread": "nvmf_tgt_poll_group_000" 00:17:46.357 } 00:17:46.357 ]' 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.357 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.357 14:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.616 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.616 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.616 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.616 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.616 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.616 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:46.616 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:47.183 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.440 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:47.440 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.440 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.440 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.440 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.440 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.440 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.441 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.717 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.975 { 00:17:47.975 "auth": { 00:17:47.975 "dhgroup": "ffdhe3072", 00:17:47.975 "digest": "sha256", 00:17:47.975 "state": "completed" 00:17:47.975 }, 00:17:47.975 "cntlid": 23, 00:17:47.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:47.975 "listen_address": { 00:17:47.975 "adrfam": "IPv4", 00:17:47.975 "traddr": "10.0.0.3", 00:17:47.975 "trsvcid": "4420", 00:17:47.975 "trtype": "TCP" 00:17:47.975 }, 00:17:47.975 "peer_address": { 00:17:47.975 "adrfam": "IPv4", 00:17:47.975 "traddr": "10.0.0.1", 00:17:47.975 "trsvcid": "42322", 00:17:47.975 "trtype": "TCP" 00:17:47.975 }, 00:17:47.975 "qid": 0, 00:17:47.975 "state": "enabled", 00:17:47.975 "thread": "nvmf_tgt_poll_group_000" 00:17:47.975 } 00:17:47.975 ]' 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:47.975 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.234 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.234 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.234 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.234 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.234 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.492 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:48.492 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:49.057 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.058 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.316 00:17:49.574 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.574 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.574 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.574 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.574 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.574 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.574 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.574 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.574 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.574 { 00:17:49.574 "auth": { 00:17:49.574 "dhgroup": "ffdhe4096", 00:17:49.574 "digest": "sha256", 00:17:49.574 "state": "completed" 00:17:49.574 }, 00:17:49.574 "cntlid": 25, 00:17:49.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:49.574 "listen_address": { 00:17:49.574 "adrfam": "IPv4", 00:17:49.574 "traddr": "10.0.0.3", 00:17:49.574 "trsvcid": "4420", 00:17:49.574 "trtype": "TCP" 00:17:49.574 }, 00:17:49.574 "peer_address": { 00:17:49.574 "adrfam": "IPv4", 00:17:49.574 "traddr": "10.0.0.1", 00:17:49.574 "trsvcid": "42356", 00:17:49.574 "trtype": "TCP" 00:17:49.574 }, 00:17:49.574 "qid": 0, 00:17:49.574 "state": "enabled", 00:17:49.574 "thread": "nvmf_tgt_poll_group_000" 00:17:49.575 } 00:17:49.575 ]' 00:17:49.575 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:49.832 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.832 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.832 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:49.832 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.832 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.832 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.832 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.090 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:50.090 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.656 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.221 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.221 { 00:17:51.221 "auth": { 00:17:51.221 "dhgroup": "ffdhe4096", 00:17:51.221 "digest": "sha256", 00:17:51.221 "state": "completed" 00:17:51.221 }, 00:17:51.221 "cntlid": 27, 00:17:51.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:51.221 "listen_address": { 00:17:51.221 "adrfam": "IPv4", 00:17:51.221 "traddr": "10.0.0.3", 00:17:51.221 "trsvcid": "4420", 00:17:51.221 "trtype": "TCP" 00:17:51.221 }, 00:17:51.221 "peer_address": { 00:17:51.221 "adrfam": "IPv4", 00:17:51.221 "traddr": "10.0.0.1", 00:17:51.221 "trsvcid": "42380", 00:17:51.221 "trtype": "TCP" 00:17:51.221 }, 00:17:51.221 "qid": 0, 
00:17:51.221 "state": "enabled", 00:17:51.221 "thread": "nvmf_tgt_poll_group_000" 00:17:51.221 } 00:17:51.221 ]' 00:17:51.221 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.480 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.480 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.480 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.480 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.480 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.480 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.480 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.738 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:51.738 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.305 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.868 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.868 { 00:17:52.868 "auth": { 00:17:52.868 "dhgroup": "ffdhe4096", 00:17:52.868 "digest": "sha256", 00:17:52.868 "state": "completed" 00:17:52.868 }, 00:17:52.868 "cntlid": 29, 00:17:52.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:52.868 "listen_address": { 00:17:52.868 "adrfam": "IPv4", 00:17:52.868 "traddr": "10.0.0.3", 00:17:52.868 "trsvcid": "4420", 00:17:52.868 "trtype": "TCP" 00:17:52.868 }, 00:17:52.868 "peer_address": { 00:17:52.868 "adrfam": "IPv4", 00:17:52.868 "traddr": "10.0.0.1", 
00:17:52.868 "trsvcid": "42396", 00:17:52.868 "trtype": "TCP" 00:17:52.868 }, 00:17:52.868 "qid": 0, 00:17:52.868 "state": "enabled", 00:17:52.868 "thread": "nvmf_tgt_poll_group_000" 00:17:52.868 } 00:17:52.868 ]' 00:17:52.868 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.126 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.126 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.126 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.126 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.126 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.126 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.126 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.384 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:53.384 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.950 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.209 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.467 00:17:54.467 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.467 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.467 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.726 { 00:17:54.726 "auth": { 00:17:54.726 "dhgroup": "ffdhe4096", 00:17:54.726 "digest": "sha256", 00:17:54.726 "state": "completed" 00:17:54.726 }, 00:17:54.726 "cntlid": 31, 00:17:54.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:54.726 "listen_address": { 00:17:54.726 "adrfam": "IPv4", 00:17:54.726 "traddr": "10.0.0.3", 00:17:54.726 "trsvcid": "4420", 00:17:54.726 "trtype": "TCP" 00:17:54.726 }, 00:17:54.726 "peer_address": { 00:17:54.726 "adrfam": "IPv4", 00:17:54.726 "traddr": 
"10.0.0.1", 00:17:54.726 "trsvcid": "42414", 00:17:54.726 "trtype": "TCP" 00:17:54.726 }, 00:17:54.726 "qid": 0, 00:17:54.726 "state": "enabled", 00:17:54.726 "thread": "nvmf_tgt_poll_group_000" 00:17:54.726 } 00:17:54.726 ]' 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.726 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.984 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:54.984 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:55.581 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.838 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.404 00:17:56.404 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.404 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.404 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.404 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.404 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.404 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.404 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.404 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.404 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.404 { 00:17:56.404 "auth": { 00:17:56.404 "dhgroup": "ffdhe6144", 00:17:56.404 "digest": "sha256", 00:17:56.404 "state": "completed" 00:17:56.405 }, 00:17:56.405 "cntlid": 33, 00:17:56.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:56.405 "listen_address": { 00:17:56.405 "adrfam": "IPv4", 00:17:56.405 "traddr": "10.0.0.3", 00:17:56.405 "trsvcid": "4420", 00:17:56.405 
"trtype": "TCP" 00:17:56.405 }, 00:17:56.405 "peer_address": { 00:17:56.405 "adrfam": "IPv4", 00:17:56.405 "traddr": "10.0.0.1", 00:17:56.405 "trsvcid": "42442", 00:17:56.405 "trtype": "TCP" 00:17:56.405 }, 00:17:56.405 "qid": 0, 00:17:56.405 "state": "enabled", 00:17:56.405 "thread": "nvmf_tgt_poll_group_000" 00:17:56.405 } 00:17:56.405 ]' 00:17:56.405 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.662 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.662 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.662 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.662 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.662 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.663 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.663 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.921 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:56.921 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.487 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.487 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.054 00:17:58.054 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.054 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.054 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.312 { 00:17:58.312 "auth": { 00:17:58.312 "dhgroup": "ffdhe6144", 00:17:58.312 "digest": "sha256", 00:17:58.312 "state": "completed" 00:17:58.312 }, 00:17:58.312 "cntlid": 35, 00:17:58.312 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:58.312 "listen_address": { 00:17:58.312 "adrfam": "IPv4", 00:17:58.312 "traddr": "10.0.0.3", 00:17:58.312 "trsvcid": "4420", 00:17:58.312 "trtype": "TCP" 00:17:58.312 }, 00:17:58.312 "peer_address": { 00:17:58.312 "adrfam": "IPv4", 00:17:58.312 "traddr": "10.0.0.1", 00:17:58.312 "trsvcid": "37694", 00:17:58.312 "trtype": "TCP" 00:17:58.312 }, 00:17:58.312 "qid": 0, 00:17:58.312 "state": "enabled", 00:17:58.312 "thread": "nvmf_tgt_poll_group_000" 00:17:58.312 } 00:17:58.312 ]' 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.312 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.570 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:58.570 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:59.136 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.394 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.959 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.959 { 00:17:59.959 "auth": { 00:17:59.959 "dhgroup": "ffdhe6144", 
00:17:59.959 "digest": "sha256", 00:17:59.959 "state": "completed" 00:17:59.959 }, 00:17:59.959 "cntlid": 37, 00:17:59.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:17:59.959 "listen_address": { 00:17:59.959 "adrfam": "IPv4", 00:17:59.959 "traddr": "10.0.0.3", 00:17:59.959 "trsvcid": "4420", 00:17:59.959 "trtype": "TCP" 00:17:59.959 }, 00:17:59.959 "peer_address": { 00:17:59.959 "adrfam": "IPv4", 00:17:59.959 "traddr": "10.0.0.1", 00:17:59.959 "trsvcid": "37726", 00:17:59.959 "trtype": "TCP" 00:17:59.959 }, 00:17:59.959 "qid": 0, 00:17:59.959 "state": "enabled", 00:17:59.959 "thread": "nvmf_tgt_poll_group_000" 00:17:59.959 } 00:17:59.959 ]' 00:17:59.959 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.216 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.216 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.216 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.216 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.216 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.216 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.216 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.473 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:00.473 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.038 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.297 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.297 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.297 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.297 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.597 00:18:01.597 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.597 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.597 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.856 { 00:18:01.856 "auth": { 00:18:01.856 "dhgroup": 
"ffdhe6144", 00:18:01.856 "digest": "sha256", 00:18:01.856 "state": "completed" 00:18:01.856 }, 00:18:01.856 "cntlid": 39, 00:18:01.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:01.856 "listen_address": { 00:18:01.856 "adrfam": "IPv4", 00:18:01.856 "traddr": "10.0.0.3", 00:18:01.856 "trsvcid": "4420", 00:18:01.856 "trtype": "TCP" 00:18:01.856 }, 00:18:01.856 "peer_address": { 00:18:01.856 "adrfam": "IPv4", 00:18:01.856 "traddr": "10.0.0.1", 00:18:01.856 "trsvcid": "37752", 00:18:01.856 "trtype": "TCP" 00:18:01.856 }, 00:18:01.856 "qid": 0, 00:18:01.856 "state": "enabled", 00:18:01.856 "thread": "nvmf_tgt_poll_group_000" 00:18:01.856 } 00:18:01.856 ]' 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.856 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.114 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:02.114 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:02.680 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.937 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.502 00:18:03.502 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.502 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.502 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.759 14:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.759 { 00:18:03.759 "auth": { 00:18:03.759 "dhgroup": "ffdhe8192", 00:18:03.759 "digest": "sha256", 00:18:03.759 "state": "completed" 00:18:03.759 }, 00:18:03.759 "cntlid": 41, 00:18:03.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:03.759 "listen_address": { 00:18:03.759 "adrfam": "IPv4", 00:18:03.759 "traddr": "10.0.0.3", 00:18:03.759 "trsvcid": "4420", 00:18:03.759 "trtype": "TCP" 00:18:03.759 }, 00:18:03.759 "peer_address": { 00:18:03.759 "adrfam": "IPv4", 00:18:03.759 "traddr": "10.0.0.1", 00:18:03.759 "trsvcid": "37786", 00:18:03.759 "trtype": "TCP" 00:18:03.759 }, 00:18:03.759 "qid": 0, 00:18:03.759 "state": "enabled", 00:18:03.759 "thread": "nvmf_tgt_poll_group_000" 00:18:03.759 } 00:18:03.759 ]' 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.759 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.016 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:04.016 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:04.616 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.616 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:04.616 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.616 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.616 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.616 14:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.616 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:04.616 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.873 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.438 00:18:05.438 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.438 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.438 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.696 14:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.696 { 00:18:05.696 "auth": { 00:18:05.696 "dhgroup": "ffdhe8192", 00:18:05.696 "digest": "sha256", 00:18:05.696 "state": "completed" 00:18:05.696 }, 00:18:05.696 "cntlid": 43, 00:18:05.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:05.696 "listen_address": { 00:18:05.696 "adrfam": "IPv4", 00:18:05.696 "traddr": "10.0.0.3", 00:18:05.696 "trsvcid": "4420", 00:18:05.696 "trtype": "TCP" 00:18:05.696 }, 00:18:05.696 "peer_address": { 00:18:05.696 "adrfam": "IPv4", 00:18:05.696 "traddr": "10.0.0.1", 00:18:05.696 "trsvcid": "37828", 00:18:05.696 "trtype": "TCP" 00:18:05.696 }, 00:18:05.696 "qid": 0, 00:18:05.696 "state": "enabled", 00:18:05.696 "thread": "nvmf_tgt_poll_group_000" 00:18:05.696 } 00:18:05.696 ]' 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.696 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.955 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:05.955 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
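
The backslash-heavy comparisons in the trace ([[ sha256 == \s\h\a\2\5\6 ]] and friends) are just xtrace's rendering of plain [[ a == b ]] pattern matches; auth.sh@73-77 use them to verify the negotiated session from the target side. A sketch of that verification step, assuming jq is installed and the tgtrpc helper from the earlier note:

  # Expect exactly one qpair whose auth block matches this pass's parameters.
  qpairs=$(tgtrpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished

Note the distinction visible in the JSON dumps: the qpair itself reports "state": "enabled", while auth.state reports "completed" once the DH-HMAC-CHAP exchange has finished.
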
00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:06.521 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.778 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.377 00:18:07.377 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.377 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.377 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.636 14:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.636 { 00:18:07.636 "auth": { 00:18:07.636 "dhgroup": "ffdhe8192", 00:18:07.636 "digest": "sha256", 00:18:07.636 "state": "completed" 00:18:07.636 }, 00:18:07.636 "cntlid": 45, 00:18:07.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:07.636 "listen_address": { 00:18:07.636 "adrfam": "IPv4", 00:18:07.636 "traddr": "10.0.0.3", 00:18:07.636 "trsvcid": "4420", 00:18:07.636 "trtype": "TCP" 00:18:07.636 }, 00:18:07.636 "peer_address": { 00:18:07.636 "adrfam": "IPv4", 00:18:07.636 "traddr": "10.0.0.1", 00:18:07.636 "trsvcid": "37860", 00:18:07.636 "trtype": "TCP" 00:18:07.636 }, 00:18:07.636 "qid": 0, 00:18:07.636 "state": "enabled", 00:18:07.636 "thread": "nvmf_tgt_poll_group_000" 00:18:07.636 } 00:18:07.636 ]' 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.636 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.894 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:07.894 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
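
Alongside the SPDK bdev initiator, every pass also drives the Linux kernel initiator through nvme-cli (auth.sh@36 and @82 in the trace). The command shape is reproduced below with the long DHHC-1 secrets elided to placeholders; the full "DHHC-1:<hash-id>:<base64>:" strings appear verbatim in the trace above.

  # In-band DH-HMAC-CHAP from the kernel host stack.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 \
      --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 \
      --dhchap-secret 'DHHC-1:02:<base64 host key>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<base64 ctrl key>:'

  # Tear the association back down before the next matrix entry.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
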
00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.460 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.717 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.282 00:18:09.282 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.282 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.282 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.282 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.282 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.282 
14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.282 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.575 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.575 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.575 { 00:18:09.575 "auth": { 00:18:09.575 "dhgroup": "ffdhe8192", 00:18:09.575 "digest": "sha256", 00:18:09.575 "state": "completed" 00:18:09.575 }, 00:18:09.575 "cntlid": 47, 00:18:09.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:09.575 "listen_address": { 00:18:09.575 "adrfam": "IPv4", 00:18:09.575 "traddr": "10.0.0.3", 00:18:09.575 "trsvcid": "4420", 00:18:09.575 "trtype": "TCP" 00:18:09.575 }, 00:18:09.575 "peer_address": { 00:18:09.575 "adrfam": "IPv4", 00:18:09.575 "traddr": "10.0.0.1", 00:18:09.575 "trsvcid": "51390", 00:18:09.575 "trtype": "TCP" 00:18:09.575 }, 00:18:09.575 "qid": 0, 00:18:09.575 "state": "enabled", 00:18:09.575 "thread": "nvmf_tgt_poll_group_000" 00:18:09.575 } 00:18:09.575 ]' 00:18:09.575 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.575 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.575 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.575 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.575 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.575 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.575 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.575 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.575 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:09.575 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
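
Note that the key3 passes above call nvmf_subsystem_add_host and bdev_nvme_attach_controller without --dhchap-ctrlr-key: auth.sh@68 builds that flag pair with bash alternate-value expansion, so an unset controller key simply drops the option. A standalone illustration of the mechanism (array contents are hypothetical):

  ckeys=("c0" "c1" "c2" "")        # key3 has no controller key in this run
  keyid=3
  # ':+' expands to the alternate words only when the slot is set and non-empty.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"               # prints 0, so the flag pair is omitted
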
00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.141 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.399 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.400 14:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.657 00:18:10.657 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.657 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.657 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.915 { 00:18:10.915 "auth": { 00:18:10.915 "dhgroup": "null", 00:18:10.915 "digest": "sha384", 00:18:10.915 "state": "completed" 00:18:10.915 }, 00:18:10.915 "cntlid": 49, 00:18:10.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:10.915 "listen_address": { 00:18:10.915 "adrfam": "IPv4", 00:18:10.915 "traddr": "10.0.0.3", 00:18:10.915 "trsvcid": "4420", 00:18:10.915 "trtype": "TCP" 00:18:10.915 }, 00:18:10.915 "peer_address": { 00:18:10.915 "adrfam": "IPv4", 00:18:10.915 "traddr": "10.0.0.1", 00:18:10.915 "trsvcid": "51428", 00:18:10.915 "trtype": "TCP" 00:18:10.915 }, 00:18:10.915 "qid": 0, 00:18:10.915 "state": "enabled", 00:18:10.915 "thread": "nvmf_tgt_poll_group_000" 00:18:10.915 } 00:18:10.915 ]' 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.915 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.173 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:11.173 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:11.739 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.739 14:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:11.739 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.739 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.739 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.739 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.739 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.739 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.997 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.255 00:18:12.255 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.255 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.255 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.513 { 00:18:12.513 "auth": { 00:18:12.513 "dhgroup": "null", 00:18:12.513 "digest": "sha384", 00:18:12.513 "state": "completed" 00:18:12.513 }, 00:18:12.513 "cntlid": 51, 00:18:12.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:12.513 "listen_address": { 00:18:12.513 "adrfam": "IPv4", 00:18:12.513 "traddr": "10.0.0.3", 00:18:12.513 "trsvcid": "4420", 00:18:12.513 "trtype": "TCP" 00:18:12.513 }, 00:18:12.513 "peer_address": { 00:18:12.513 "adrfam": "IPv4", 00:18:12.513 "traddr": "10.0.0.1", 00:18:12.513 "trsvcid": "51444", 00:18:12.513 "trtype": "TCP" 00:18:12.513 }, 00:18:12.513 "qid": 0, 00:18:12.513 "state": "enabled", 00:18:12.513 "thread": "nvmf_tgt_poll_group_000" 00:18:12.513 } 00:18:12.513 ]' 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.513 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.772 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:12.772 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.343 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.343 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.600 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.858 00:18:13.858 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.858 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.858 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.115 { 00:18:14.115 "auth": { 00:18:14.115 "dhgroup": "null", 00:18:14.115 "digest": "sha384", 00:18:14.115 "state": "completed" 00:18:14.115 }, 00:18:14.115 "cntlid": 53, 00:18:14.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:14.115 "listen_address": { 00:18:14.115 "adrfam": "IPv4", 00:18:14.115 "traddr": "10.0.0.3", 00:18:14.115 "trsvcid": "4420", 00:18:14.115 "trtype": "TCP" 00:18:14.115 }, 00:18:14.115 "peer_address": { 00:18:14.115 "adrfam": "IPv4", 00:18:14.115 "traddr": "10.0.0.1", 00:18:14.115 "trsvcid": "51470", 00:18:14.115 "trtype": "TCP" 00:18:14.115 }, 00:18:14.115 "qid": 0, 00:18:14.115 "state": "enabled", 00:18:14.115 "thread": "nvmf_tgt_poll_group_000" 00:18:14.115 } 00:18:14.115 ]' 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.115 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.372 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:14.372 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:14.937 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.195 14:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.452 00:18:15.452 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.452 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:18:15.452 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.709 { 00:18:15.709 "auth": { 00:18:15.709 "dhgroup": "null", 00:18:15.709 "digest": "sha384", 00:18:15.709 "state": "completed" 00:18:15.709 }, 00:18:15.709 "cntlid": 55, 00:18:15.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:15.709 "listen_address": { 00:18:15.709 "adrfam": "IPv4", 00:18:15.709 "traddr": "10.0.0.3", 00:18:15.709 "trsvcid": "4420", 00:18:15.709 "trtype": "TCP" 00:18:15.709 }, 00:18:15.709 "peer_address": { 00:18:15.709 "adrfam": "IPv4", 00:18:15.709 "traddr": "10.0.0.1", 00:18:15.709 "trsvcid": "51508", 00:18:15.709 "trtype": "TCP" 00:18:15.709 }, 00:18:15.709 "qid": 0, 00:18:15.709 "state": "enabled", 00:18:15.709 "thread": "nvmf_tgt_poll_group_000" 00:18:15.709 } 00:18:15.709 ]' 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.709 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.966 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:15.966 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
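The cycle above repeats for every digest/dhgroup/key combination under test. Stripped of the xtrace noise, each iteration amounts to roughly the following (a minimal sketch, not the literal target/auth.sh source: the address, ports, NQNs, sockets, and key names are copied from the log; rpc.py is shortened from the log's full /home/vagrant/spdk_repo/spdk/scripts path; $KEY0/$CKEY0 stand in for the DHHC-1 secrets the test generated earlier):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435
    HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435   # hostid as shown in the log

    # host side: restrict negotiation to the digest/dhgroup pair under test
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups null
    # target side: admit the host NQN with the key pair under test
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach from the SPDK host stack and authenticate
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # confirm the qpair negotiated the expected parameters, then detach
    ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'   # expect "completed"
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, then tear down
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The split between the two RPC sockets mirrors the test setup visible in the log: the nvmf_* calls (rpc_cmd) configure the target process on its default socket, while the bdev_nvme_* calls (hostrpc) go to the separate initiator process listening on /var/tmp/host.sock.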
00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:16.546 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.804 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.062 00:18:17.062 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.062 
14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.062 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.320 { 00:18:17.320 "auth": { 00:18:17.320 "dhgroup": "ffdhe2048", 00:18:17.320 "digest": "sha384", 00:18:17.320 "state": "completed" 00:18:17.320 }, 00:18:17.320 "cntlid": 57, 00:18:17.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:17.320 "listen_address": { 00:18:17.320 "adrfam": "IPv4", 00:18:17.320 "traddr": "10.0.0.3", 00:18:17.320 "trsvcid": "4420", 00:18:17.320 "trtype": "TCP" 00:18:17.320 }, 00:18:17.320 "peer_address": { 00:18:17.320 "adrfam": "IPv4", 00:18:17.320 "traddr": "10.0.0.1", 00:18:17.320 "trsvcid": "51528", 00:18:17.320 "trtype": "TCP" 00:18:17.320 }, 00:18:17.320 "qid": 0, 00:18:17.320 "state": "enabled", 00:18:17.320 "thread": "nvmf_tgt_poll_group_000" 00:18:17.320 } 00:18:17.320 ]' 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.320 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.578 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:17.578 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: 
--dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.143 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.401 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.659 00:18:18.659 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.659 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.659 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.917 { 00:18:18.917 "auth": { 00:18:18.917 "dhgroup": "ffdhe2048", 00:18:18.917 "digest": "sha384", 00:18:18.917 "state": "completed" 00:18:18.917 }, 00:18:18.917 "cntlid": 59, 00:18:18.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:18.917 "listen_address": { 00:18:18.917 "adrfam": "IPv4", 00:18:18.917 "traddr": "10.0.0.3", 00:18:18.917 "trsvcid": "4420", 00:18:18.917 "trtype": "TCP" 00:18:18.917 }, 00:18:18.917 "peer_address": { 00:18:18.917 "adrfam": "IPv4", 00:18:18.917 "traddr": "10.0.0.1", 00:18:18.917 "trsvcid": "54008", 00:18:18.917 "trtype": "TCP" 00:18:18.917 }, 00:18:18.917 "qid": 0, 00:18:18.917 "state": "enabled", 00:18:18.917 "thread": "nvmf_tgt_poll_group_000" 00:18:18.917 } 00:18:18.917 ]' 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.917 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.175 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:19.175 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.778 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.036 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.294 00:18:20.294 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.294 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.294 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.553 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.553 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.553 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.553 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.553 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.553 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.553 { 00:18:20.553 "auth": { 00:18:20.553 "dhgroup": "ffdhe2048", 00:18:20.553 "digest": "sha384", 00:18:20.553 "state": "completed" 00:18:20.553 }, 00:18:20.553 "cntlid": 61, 00:18:20.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:20.553 "listen_address": { 00:18:20.553 "adrfam": "IPv4", 00:18:20.553 "traddr": "10.0.0.3", 00:18:20.553 "trsvcid": "4420", 00:18:20.553 "trtype": "TCP" 00:18:20.553 }, 00:18:20.553 "peer_address": { 00:18:20.553 "adrfam": "IPv4", 00:18:20.553 "traddr": "10.0.0.1", 00:18:20.553 "trsvcid": "54046", 00:18:20.553 "trtype": "TCP" 00:18:20.553 }, 00:18:20.553 "qid": 0, 00:18:20.553 "state": "enabled", 00:18:20.553 "thread": "nvmf_tgt_poll_group_000" 00:18:20.553 } 00:18:20.553 ]' 00:18:20.553 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.553 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.553 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.553 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.553 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.553 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.553 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.553 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.811 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:20.811 14:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.378 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.636 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.895 00:18:21.895 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.895 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.895 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.198 { 00:18:22.198 "auth": { 00:18:22.198 "dhgroup": "ffdhe2048", 00:18:22.198 "digest": "sha384", 00:18:22.198 "state": "completed" 00:18:22.198 }, 00:18:22.198 "cntlid": 63, 00:18:22.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:22.198 "listen_address": { 00:18:22.198 "adrfam": "IPv4", 00:18:22.198 "traddr": "10.0.0.3", 00:18:22.198 "trsvcid": "4420", 00:18:22.198 "trtype": "TCP" 00:18:22.198 }, 00:18:22.198 "peer_address": { 00:18:22.198 "adrfam": "IPv4", 00:18:22.198 "traddr": "10.0.0.1", 00:18:22.198 "trsvcid": "54064", 00:18:22.198 "trtype": "TCP" 00:18:22.198 }, 00:18:22.198 "qid": 0, 00:18:22.198 "state": "enabled", 00:18:22.198 "thread": "nvmf_tgt_poll_group_000" 00:18:22.198 } 00:18:22.198 ]' 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:22.198 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:22.766 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:23.025 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.284 00:18:23.284 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.284 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.284 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.542 { 00:18:23.542 "auth": { 00:18:23.542 "dhgroup": "ffdhe3072", 00:18:23.542 "digest": "sha384", 00:18:23.542 "state": "completed" 00:18:23.542 }, 00:18:23.542 "cntlid": 65, 00:18:23.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:23.542 "listen_address": { 00:18:23.542 "adrfam": "IPv4", 00:18:23.542 "traddr": "10.0.0.3", 00:18:23.542 "trsvcid": "4420", 00:18:23.542 "trtype": "TCP" 00:18:23.542 }, 00:18:23.542 "peer_address": { 00:18:23.542 "adrfam": "IPv4", 00:18:23.542 "traddr": "10.0.0.1", 00:18:23.542 "trsvcid": "54092", 00:18:23.542 "trtype": "TCP" 00:18:23.542 }, 00:18:23.542 "qid": 0, 00:18:23.542 "state": "enabled", 00:18:23.542 "thread": "nvmf_tgt_poll_group_000" 00:18:23.542 } 00:18:23.542 ]' 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.542 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.799 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:23.799 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.366 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.624 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.624 14:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.625 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.884 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.884 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.884 { 00:18:24.884 "auth": { 00:18:24.884 "dhgroup": "ffdhe3072", 00:18:24.884 "digest": "sha384", 00:18:24.884 "state": "completed" 00:18:24.884 }, 00:18:24.884 "cntlid": 67, 00:18:24.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:24.884 "listen_address": { 00:18:24.884 "adrfam": "IPv4", 00:18:24.884 "traddr": "10.0.0.3", 00:18:24.884 "trsvcid": "4420", 00:18:24.884 "trtype": "TCP" 00:18:24.884 }, 00:18:24.884 "peer_address": { 00:18:24.884 "adrfam": "IPv4", 00:18:24.884 "traddr": "10.0.0.1", 00:18:24.884 "trsvcid": "54120", 00:18:24.884 "trtype": "TCP" 00:18:24.884 }, 00:18:24.884 "qid": 0, 00:18:24.884 "state": "enabled", 00:18:24.885 "thread": "nvmf_tgt_poll_group_000" 00:18:24.885 } 00:18:24.885 ]' 00:18:24.885 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.143 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.143 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.143 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.143 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.143 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.143 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.143 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.400 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:25.401 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.967 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.533 00:18:26.533 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.533 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.533 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.533 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.533 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.533 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.533 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.533 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.533 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.533 { 00:18:26.533 "auth": { 00:18:26.533 "dhgroup": "ffdhe3072", 00:18:26.533 "digest": "sha384", 00:18:26.533 "state": "completed" 00:18:26.533 }, 00:18:26.533 "cntlid": 69, 00:18:26.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:26.533 "listen_address": { 00:18:26.533 "adrfam": "IPv4", 00:18:26.533 "traddr": "10.0.0.3", 00:18:26.533 "trsvcid": "4420", 00:18:26.533 "trtype": "TCP" 00:18:26.533 }, 00:18:26.533 "peer_address": { 00:18:26.533 "adrfam": "IPv4", 00:18:26.533 "traddr": "10.0.0.1", 00:18:26.533 "trsvcid": "54156", 00:18:26.533 "trtype": "TCP" 00:18:26.533 }, 00:18:26.533 "qid": 0, 00:18:26.533 "state": "enabled", 00:18:26.533 "thread": "nvmf_tgt_poll_group_000" 00:18:26.533 } 00:18:26.533 ]' 00:18:26.533 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.792 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.792 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.792 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.792 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.792 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.792 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:26.792 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.052 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:27.052 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.622 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.880 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.138 00:18:28.138 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.138 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.138 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.396 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.396 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.396 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.396 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.396 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.396 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.396 { 00:18:28.396 "auth": { 00:18:28.396 "dhgroup": "ffdhe3072", 00:18:28.396 "digest": "sha384", 00:18:28.396 "state": "completed" 00:18:28.396 }, 00:18:28.396 "cntlid": 71, 00:18:28.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:28.396 "listen_address": { 00:18:28.396 "adrfam": "IPv4", 00:18:28.396 "traddr": "10.0.0.3", 00:18:28.396 "trsvcid": "4420", 00:18:28.396 "trtype": "TCP" 00:18:28.396 }, 00:18:28.396 "peer_address": { 00:18:28.396 "adrfam": "IPv4", 00:18:28.396 "traddr": "10.0.0.1", 00:18:28.396 "trsvcid": "36338", 00:18:28.396 "trtype": "TCP" 00:18:28.396 }, 00:18:28.396 "qid": 0, 00:18:28.396 "state": "enabled", 00:18:28.396 "thread": "nvmf_tgt_poll_group_000" 00:18:28.396 } 00:18:28.396 ]' 00:18:28.396 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.397 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.397 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.397 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.397 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.397 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.397 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.397 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.655 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:28.655 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.222 14:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.222 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.485 00:18:29.485 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.485 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.485 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.743 { 00:18:29.743 "auth": { 00:18:29.743 "dhgroup": "ffdhe4096", 00:18:29.743 "digest": "sha384", 00:18:29.743 "state": "completed" 00:18:29.743 }, 00:18:29.743 "cntlid": 73, 00:18:29.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:29.743 "listen_address": { 00:18:29.743 "adrfam": "IPv4", 00:18:29.743 "traddr": "10.0.0.3", 00:18:29.743 "trsvcid": "4420", 00:18:29.743 "trtype": "TCP" 00:18:29.743 }, 00:18:29.743 "peer_address": { 00:18:29.743 "adrfam": "IPv4", 00:18:29.743 "traddr": "10.0.0.1", 00:18:29.743 "trsvcid": "36370", 00:18:29.743 "trtype": "TCP" 00:18:29.743 }, 00:18:29.743 "qid": 0, 00:18:29.743 "state": "enabled", 00:18:29.743 "thread": "nvmf_tgt_poll_group_000" 00:18:29.743 } 00:18:29.743 ]' 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.743 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.001 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.001 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.001 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.001 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.001 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.001 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:30.002 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.934 14:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.934 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.192 00:18:31.192 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.192 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.192 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.450 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.450 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.450 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.450 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.450 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.450 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.450 { 00:18:31.450 "auth": { 00:18:31.450 "dhgroup": "ffdhe4096", 00:18:31.450 "digest": "sha384", 00:18:31.450 "state": "completed" 00:18:31.450 }, 00:18:31.450 "cntlid": 75, 00:18:31.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:31.450 "listen_address": { 00:18:31.450 "adrfam": "IPv4", 00:18:31.450 "traddr": "10.0.0.3", 00:18:31.450 "trsvcid": "4420", 00:18:31.450 "trtype": "TCP" 00:18:31.450 }, 00:18:31.450 "peer_address": { 00:18:31.450 "adrfam": "IPv4", 00:18:31.450 "traddr": "10.0.0.1", 00:18:31.450 "trsvcid": "36388", 00:18:31.450 "trtype": "TCP" 00:18:31.450 }, 00:18:31.450 "qid": 0, 00:18:31.450 "state": "enabled", 00:18:31.450 "thread": "nvmf_tgt_poll_group_000" 00:18:31.450 } 00:18:31.450 ]' 00:18:31.450 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.450 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.450 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.450 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:18:31.450 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.450 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.450 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.450 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.708 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:31.708 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:32.312 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.570 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.829 00:18:32.829 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.829 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.829 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.088 { 00:18:33.088 "auth": { 00:18:33.088 "dhgroup": "ffdhe4096", 00:18:33.088 "digest": "sha384", 00:18:33.088 "state": "completed" 00:18:33.088 }, 00:18:33.088 "cntlid": 77, 00:18:33.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:33.088 "listen_address": { 00:18:33.088 "adrfam": "IPv4", 00:18:33.088 "traddr": "10.0.0.3", 00:18:33.088 "trsvcid": "4420", 00:18:33.088 "trtype": "TCP" 00:18:33.088 }, 00:18:33.088 "peer_address": { 00:18:33.088 "adrfam": "IPv4", 00:18:33.088 "traddr": "10.0.0.1", 00:18:33.088 "trsvcid": "36424", 00:18:33.088 "trtype": "TCP" 00:18:33.088 }, 00:18:33.088 "qid": 0, 00:18:33.088 "state": "enabled", 00:18:33.088 "thread": "nvmf_tgt_poll_group_000" 00:18:33.088 } 00:18:33.088 ]' 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.088 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:18:33.346 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.346 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.346 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.346 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.346 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.605 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:33.605 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.171 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.430 14:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.430 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.688 00:18:34.689 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.689 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.689 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.947 { 00:18:34.947 "auth": { 00:18:34.947 "dhgroup": "ffdhe4096", 00:18:34.947 "digest": "sha384", 00:18:34.947 "state": "completed" 00:18:34.947 }, 00:18:34.947 "cntlid": 79, 00:18:34.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:34.947 "listen_address": { 00:18:34.947 "adrfam": "IPv4", 00:18:34.947 "traddr": "10.0.0.3", 00:18:34.947 "trsvcid": "4420", 00:18:34.947 "trtype": "TCP" 00:18:34.947 }, 00:18:34.947 "peer_address": { 00:18:34.947 "adrfam": "IPv4", 00:18:34.947 "traddr": "10.0.0.1", 00:18:34.947 "trsvcid": "36440", 00:18:34.947 "trtype": "TCP" 00:18:34.947 }, 00:18:34.947 "qid": 0, 00:18:34.947 "state": "enabled", 00:18:34.947 "thread": "nvmf_tgt_poll_group_000" 00:18:34.947 } 00:18:34.947 ]' 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.947 14:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.947 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.206 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:35.206 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:35.771 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.029 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.287 00:18:36.287 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.287 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.287 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.545 { 00:18:36.545 "auth": { 00:18:36.545 "dhgroup": "ffdhe6144", 00:18:36.545 "digest": "sha384", 00:18:36.545 "state": "completed" 00:18:36.545 }, 00:18:36.545 "cntlid": 81, 00:18:36.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:36.545 "listen_address": { 00:18:36.545 "adrfam": "IPv4", 00:18:36.545 "traddr": "10.0.0.3", 00:18:36.545 "trsvcid": "4420", 00:18:36.545 "trtype": "TCP" 00:18:36.545 }, 00:18:36.545 "peer_address": { 00:18:36.545 "adrfam": "IPv4", 00:18:36.545 "traddr": "10.0.0.1", 00:18:36.545 "trsvcid": "36462", 00:18:36.545 "trtype": "TCP" 00:18:36.545 }, 00:18:36.545 "qid": 0, 00:18:36.545 "state": "enabled", 00:18:36.545 "thread": "nvmf_tgt_poll_group_000" 00:18:36.545 } 00:18:36.545 ]' 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
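The records above repeat one connect_authenticate pass per key pair. A minimal reproduction of the RPC-side cycle, assuming rpc.py's default target socket for the target-side calls (the script routes those through rpc_cmd) and that key0/ckey0 were registered as keyring entries earlier in the run; every flag and value below appears verbatim in the log:

#!/usr/bin/env bash
# One sha384/ffdhe6144 authentication pass, as exercised by target/auth.sh.
# Assumption: key0/ckey0 already exist in the SPDK keyring; target-side calls
# use rpc.py's default socket, host-side calls use /var/tmp/host.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435

# Pin the host to a single digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Register the host on the target with the key pair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach through the host RPC, then verify the controller name and the
# negotiated auth state before tearing the connection back down.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect: completed
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0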
00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.545 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.803 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.803 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.803 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.060 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:37.060 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.626 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.886 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.144 00:18:38.144 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.144 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.144 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.402 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.402 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.402 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.402 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.402 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.402 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.402 { 00:18:38.402 "auth": { 00:18:38.402 "dhgroup": "ffdhe6144", 00:18:38.402 "digest": "sha384", 00:18:38.402 "state": "completed" 00:18:38.402 }, 00:18:38.402 "cntlid": 83, 00:18:38.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:38.402 "listen_address": { 00:18:38.402 "adrfam": "IPv4", 00:18:38.402 "traddr": "10.0.0.3", 00:18:38.402 "trsvcid": "4420", 00:18:38.402 "trtype": "TCP" 00:18:38.402 }, 00:18:38.402 "peer_address": { 00:18:38.403 "adrfam": "IPv4", 00:18:38.403 "traddr": "10.0.0.1", 00:18:38.403 "trsvcid": "39910", 00:18:38.403 "trtype": "TCP" 00:18:38.403 }, 00:18:38.403 "qid": 0, 00:18:38.403 "state": 
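Taken as a whole, this excerpt is the sha384 sweep of target/auth.sh: the loops at target/auth.sh@119-@123 re-run the pass above for every DH group and key id, and each pass ends by replaying the same handshake through the kernel initiator before removing the host. A structural sketch, with the group and key lists inferred from this run (cntlids 65 through 83 advance by two per key) and the DHHC-1 secrets elided; the full secret values appear in the log:

# Loop structure behind target/auth.sh@119-@123, inferred from this run.
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
  for keyid in 0 1 2 3; do
    # Host side: allow only the digest/dhgroup combination under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # connect_authenticate does the RPC attach/verify/detach shown above,
    # then replays the handshake via the kernel initiator, roughly:
    #   nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    #     -q <hostnqn> --hostid <uuid> -l 0 \
    #     --dhchap-secret DHHC-1:0X:... --dhchap-ctrl-secret DHHC-1:0Y:...
    #   nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    connect_authenticate sha384 "$dhgroup" "$keyid"
  done
done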
"enabled", 00:18:38.403 "thread": "nvmf_tgt_poll_group_000" 00:18:38.403 } 00:18:38.403 ]' 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.403 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.660 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:38.661 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:39.594 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.594 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:39.594 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.595 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.595 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.595 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.595 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:39.595 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.595 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.159 00:18:40.159 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.159 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.160 { 00:18:40.160 "auth": { 00:18:40.160 "dhgroup": "ffdhe6144", 00:18:40.160 "digest": "sha384", 00:18:40.160 "state": "completed" 00:18:40.160 }, 00:18:40.160 "cntlid": 85, 00:18:40.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:40.160 "listen_address": { 00:18:40.160 "adrfam": "IPv4", 00:18:40.160 "traddr": "10.0.0.3", 00:18:40.160 "trsvcid": "4420", 00:18:40.160 "trtype": "TCP" 00:18:40.160 }, 00:18:40.160 "peer_address": { 00:18:40.160 "adrfam": "IPv4", 00:18:40.160 "traddr": "10.0.0.1", 00:18:40.160 
"trsvcid": "39942", 00:18:40.160 "trtype": "TCP" 00:18:40.160 }, 00:18:40.160 "qid": 0, 00:18:40.160 "state": "enabled", 00:18:40.160 "thread": "nvmf_tgt_poll_group_000" 00:18:40.160 } 00:18:40.160 ]' 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.160 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.417 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.417 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.418 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.418 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.418 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.418 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:40.418 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:41.359 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.359 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:41.359 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.359 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.359 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.360 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.926 00:18:41.926 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.926 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.926 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.185 { 00:18:42.185 "auth": { 00:18:42.185 "dhgroup": "ffdhe6144", 00:18:42.185 "digest": "sha384", 00:18:42.185 "state": "completed" 00:18:42.185 }, 00:18:42.185 "cntlid": 87, 00:18:42.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:42.185 "listen_address": { 00:18:42.185 "adrfam": "IPv4", 00:18:42.185 "traddr": "10.0.0.3", 00:18:42.185 "trsvcid": "4420", 00:18:42.185 "trtype": "TCP" 00:18:42.185 }, 00:18:42.185 "peer_address": { 00:18:42.185 "adrfam": "IPv4", 00:18:42.185 "traddr": "10.0.0.1", 
00:18:42.185 "trsvcid": "39986", 00:18:42.185 "trtype": "TCP" 00:18:42.185 }, 00:18:42.185 "qid": 0, 00:18:42.185 "state": "enabled", 00:18:42.185 "thread": "nvmf_tgt_poll_group_000" 00:18:42.185 } 00:18:42.185 ]' 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.185 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.442 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:42.442 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.008 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.266 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.832 00:18:43.832 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.832 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.832 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.090 { 00:18:44.090 "auth": { 00:18:44.090 "dhgroup": "ffdhe8192", 00:18:44.090 "digest": "sha384", 00:18:44.090 "state": "completed" 00:18:44.090 }, 00:18:44.090 "cntlid": 89, 00:18:44.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:44.090 "listen_address": { 00:18:44.090 "adrfam": "IPv4", 00:18:44.090 "traddr": "10.0.0.3", 00:18:44.090 "trsvcid": "4420", 00:18:44.090 "trtype": "TCP" 
00:18:44.090 }, 00:18:44.090 "peer_address": { 00:18:44.090 "adrfam": "IPv4", 00:18:44.090 "traddr": "10.0.0.1", 00:18:44.090 "trsvcid": "40012", 00:18:44.090 "trtype": "TCP" 00:18:44.090 }, 00:18:44.090 "qid": 0, 00:18:44.090 "state": "enabled", 00:18:44.090 "thread": "nvmf_tgt_poll_group_000" 00:18:44.090 } 00:18:44.090 ]' 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.090 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.348 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:44.348 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.915 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.174 14:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.174 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.742 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.742 { 00:18:45.742 "auth": { 00:18:45.742 "dhgroup": "ffdhe8192", 00:18:45.742 "digest": "sha384", 00:18:45.742 "state": "completed" 00:18:45.742 }, 00:18:45.742 "cntlid": 91, 00:18:45.742 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:45.742 "listen_address": { 00:18:45.742 "adrfam": "IPv4", 00:18:45.742 "traddr": "10.0.0.3", 00:18:45.742 "trsvcid": "4420", 00:18:45.742 "trtype": "TCP" 00:18:45.742 }, 00:18:45.742 "peer_address": { 00:18:45.742 "adrfam": "IPv4", 00:18:45.742 "traddr": "10.0.0.1", 00:18:45.742 "trsvcid": "40028", 00:18:45.742 "trtype": "TCP" 00:18:45.742 }, 00:18:45.742 "qid": 0, 00:18:45.742 "state": "enabled", 00:18:45.742 "thread": "nvmf_tgt_poll_group_000" 00:18:45.742 } 00:18:45.742 ]' 00:18:45.742 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.000 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.000 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.001 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.001 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.001 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.001 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.001 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.260 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:46.260 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.828 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.087 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.653 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.653 { 00:18:47.653 "auth": { 00:18:47.653 "dhgroup": "ffdhe8192", 
00:18:47.653 "digest": "sha384", 00:18:47.653 "state": "completed" 00:18:47.653 }, 00:18:47.653 "cntlid": 93, 00:18:47.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:47.653 "listen_address": { 00:18:47.653 "adrfam": "IPv4", 00:18:47.653 "traddr": "10.0.0.3", 00:18:47.653 "trsvcid": "4420", 00:18:47.653 "trtype": "TCP" 00:18:47.653 }, 00:18:47.653 "peer_address": { 00:18:47.653 "adrfam": "IPv4", 00:18:47.653 "traddr": "10.0.0.1", 00:18:47.653 "trsvcid": "40048", 00:18:47.653 "trtype": "TCP" 00:18:47.653 }, 00:18:47.653 "qid": 0, 00:18:47.653 "state": "enabled", 00:18:47.653 "thread": "nvmf_tgt_poll_group_000" 00:18:47.653 } 00:18:47.653 ]' 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.653 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.912 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.912 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.912 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.912 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:47.912 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:48.477 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.735 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:48.735 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.736 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.303 00:18:49.303 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.303 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.303 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.561 { 00:18:49.561 "auth": { 00:18:49.561 "dhgroup": 
"ffdhe8192", 00:18:49.561 "digest": "sha384", 00:18:49.561 "state": "completed" 00:18:49.561 }, 00:18:49.561 "cntlid": 95, 00:18:49.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:49.561 "listen_address": { 00:18:49.561 "adrfam": "IPv4", 00:18:49.561 "traddr": "10.0.0.3", 00:18:49.561 "trsvcid": "4420", 00:18:49.561 "trtype": "TCP" 00:18:49.561 }, 00:18:49.561 "peer_address": { 00:18:49.561 "adrfam": "IPv4", 00:18:49.561 "traddr": "10.0.0.1", 00:18:49.561 "trsvcid": "37268", 00:18:49.561 "trtype": "TCP" 00:18:49.561 }, 00:18:49.561 "qid": 0, 00:18:49.561 "state": "enabled", 00:18:49.561 "thread": "nvmf_tgt_poll_group_000" 00:18:49.561 } 00:18:49.561 ]' 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.561 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.819 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:49.819 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.387 
14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:50.387 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:50.646 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:50.646 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.646 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.646 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.647 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.904 00:18:50.904 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.904 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.904 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.163 { 00:18:51.163 "auth": { 00:18:51.163 "dhgroup": "null", 00:18:51.163 "digest": "sha512", 00:18:51.163 "state": "completed" 00:18:51.163 }, 00:18:51.163 "cntlid": 97, 00:18:51.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:51.163 "listen_address": { 00:18:51.163 "adrfam": "IPv4", 00:18:51.163 "traddr": "10.0.0.3", 00:18:51.163 "trsvcid": "4420", 00:18:51.163 "trtype": "TCP" 00:18:51.163 }, 00:18:51.163 "peer_address": { 00:18:51.163 "adrfam": "IPv4", 00:18:51.163 "traddr": "10.0.0.1", 00:18:51.163 "trsvcid": "37294", 00:18:51.163 "trtype": "TCP" 00:18:51.163 }, 00:18:51.163 "qid": 0, 00:18:51.163 "state": "enabled", 00:18:51.163 "thread": "nvmf_tgt_poll_group_000" 00:18:51.163 } 00:18:51.163 ]' 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.163 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.421 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:51.421 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.986 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.256 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.513 00:18:52.513 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.513 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.513 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.770 14:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.770 { 00:18:52.770 "auth": { 00:18:52.770 "dhgroup": "null", 00:18:52.770 "digest": "sha512", 00:18:52.770 "state": "completed" 00:18:52.770 }, 00:18:52.770 "cntlid": 99, 00:18:52.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:52.770 "listen_address": { 00:18:52.770 "adrfam": "IPv4", 00:18:52.770 "traddr": "10.0.0.3", 00:18:52.770 "trsvcid": "4420", 00:18:52.770 "trtype": "TCP" 00:18:52.770 }, 00:18:52.770 "peer_address": { 00:18:52.770 "adrfam": "IPv4", 00:18:52.770 "traddr": "10.0.0.1", 00:18:52.770 "trsvcid": "37324", 00:18:52.770 "trtype": "TCP" 00:18:52.770 }, 00:18:52.770 "qid": 0, 00:18:52.770 "state": "enabled", 00:18:52.770 "thread": "nvmf_tgt_poll_group_000" 00:18:52.770 } 00:18:52.770 ]' 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.770 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.028 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:53.028 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:53.594 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.594 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:53.594 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.594 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.594 14:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.594 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.594 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.594 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.853 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.111 00:18:54.111 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.111 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.111 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.369 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.369 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.370 { 00:18:54.370 "auth": { 00:18:54.370 "dhgroup": "null", 00:18:54.370 "digest": "sha512", 00:18:54.370 "state": "completed" 00:18:54.370 }, 00:18:54.370 "cntlid": 101, 00:18:54.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:54.370 "listen_address": { 00:18:54.370 "adrfam": "IPv4", 00:18:54.370 "traddr": "10.0.0.3", 00:18:54.370 "trsvcid": "4420", 00:18:54.370 "trtype": "TCP" 00:18:54.370 }, 00:18:54.370 "peer_address": { 00:18:54.370 "adrfam": "IPv4", 00:18:54.370 "traddr": "10.0.0.1", 00:18:54.370 "trsvcid": "37342", 00:18:54.370 "trtype": "TCP" 00:18:54.370 }, 00:18:54.370 "qid": 0, 00:18:54.370 "state": "enabled", 00:18:54.370 "thread": "nvmf_tgt_poll_group_000" 00:18:54.370 } 00:18:54.370 ]' 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:54.370 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.628 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.628 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.628 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.628 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:54.628 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.194 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.454 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.712 00:18:55.712 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.712 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.712 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.971 { 00:18:55.971 "auth": { 00:18:55.971 "dhgroup": "null", 00:18:55.971 "digest": "sha512", 00:18:55.971 "state": "completed" 00:18:55.971 }, 00:18:55.971 "cntlid": 103, 00:18:55.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:55.971 "listen_address": { 00:18:55.971 "adrfam": "IPv4", 00:18:55.971 "traddr": "10.0.0.3", 00:18:55.971 "trsvcid": "4420", 00:18:55.971 "trtype": "TCP" 00:18:55.971 }, 00:18:55.971 "peer_address": { 00:18:55.971 "adrfam": "IPv4", 00:18:55.971 "traddr": "10.0.0.1", 00:18:55.971 "trsvcid": "37364", 00:18:55.971 "trtype": "TCP" 00:18:55.971 }, 00:18:55.971 "qid": 0, 00:18:55.971 "state": "enabled", 00:18:55.971 "thread": "nvmf_tgt_poll_group_000" 00:18:55.971 } 00:18:55.971 ]' 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.971 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:56.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:56.801 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.060 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.319 00:18:57.319 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.319 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.319 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.577 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.577 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.577 
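
The target/auth.sh@119 and @120 frames above are the two loops driving this whole section; at this point the trace has moved from the null group to ffdhe2048. Reconstructed from the line references, the shape is approximately the following (a sketch, not the script's exact text; sha512 is constant throughout this excerpt, so an enclosing digest loop is implied but not visible here):

  for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do       # key0 .. key3
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
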
14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.577 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.577 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.577 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.577 { 00:18:57.577 "auth": { 00:18:57.577 "dhgroup": "ffdhe2048", 00:18:57.577 "digest": "sha512", 00:18:57.577 "state": "completed" 00:18:57.577 }, 00:18:57.577 "cntlid": 105, 00:18:57.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:57.577 "listen_address": { 00:18:57.577 "adrfam": "IPv4", 00:18:57.577 "traddr": "10.0.0.3", 00:18:57.577 "trsvcid": "4420", 00:18:57.577 "trtype": "TCP" 00:18:57.577 }, 00:18:57.577 "peer_address": { 00:18:57.577 "adrfam": "IPv4", 00:18:57.577 "traddr": "10.0.0.1", 00:18:57.577 "trsvcid": "37390", 00:18:57.577 "trtype": "TCP" 00:18:57.577 }, 00:18:57.577 "qid": 0, 00:18:57.578 "state": "enabled", 00:18:57.578 "thread": "nvmf_tgt_poll_group_000" 00:18:57.578 } 00:18:57.578 ]' 00:18:57.578 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.578 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.578 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.836 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.836 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.836 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.836 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.836 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.094 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:58.094 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:18:58.659 14:23:26 
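
The nvme_connect (@36/@80) and disconnect (@82) frames are the kernel-initiator leg of each pass: the same key material is handed to nvme-cli as inline DH-HMAC-CHAP secrets, then the controller is torn down again. Stripped of trace prefixes, with the full DHHC-1 blobs abbreviated here:

  # host secret and controller (bidirectional) secret from the current key slot
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 \
      --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 \
      --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
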
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.659 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.917 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.176 { 00:18:59.176 "auth": { 00:18:59.176 "dhgroup": "ffdhe2048", 00:18:59.176 "digest": "sha512", 00:18:59.176 "state": "completed" 00:18:59.176 }, 00:18:59.176 "cntlid": 107, 00:18:59.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:18:59.176 "listen_address": { 00:18:59.176 "adrfam": "IPv4", 00:18:59.176 "traddr": "10.0.0.3", 00:18:59.176 "trsvcid": "4420", 00:18:59.176 "trtype": "TCP" 00:18:59.176 }, 00:18:59.176 "peer_address": { 00:18:59.176 "adrfam": "IPv4", 00:18:59.176 "traddr": "10.0.0.1", 00:18:59.176 "trsvcid": "60816", 00:18:59.176 "trtype": "TCP" 00:18:59.176 }, 00:18:59.176 "qid": 0, 00:18:59.176 "state": "enabled", 00:18:59.176 "thread": "nvmf_tgt_poll_group_000" 00:18:59.176 } 00:18:59.176 ]' 00:18:59.176 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.434 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.434 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.434 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.434 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.434 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.434 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.434 14:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.692 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:18:59.692 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.258 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.517 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.774 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.774 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.032 { 00:19:01.032 "auth": { 00:19:01.032 "dhgroup": "ffdhe2048", 00:19:01.032 "digest": "sha512", 00:19:01.032 "state": "completed" 00:19:01.032 }, 00:19:01.032 "cntlid": 109, 00:19:01.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:01.032 "listen_address": { 00:19:01.032 "adrfam": "IPv4", 00:19:01.032 "traddr": "10.0.0.3", 00:19:01.032 "trsvcid": "4420", 00:19:01.032 "trtype": "TCP" 00:19:01.032 }, 00:19:01.032 "peer_address": { 00:19:01.032 "adrfam": "IPv4", 00:19:01.032 "traddr": "10.0.0.1", 00:19:01.032 "trsvcid": "60842", 00:19:01.032 "trtype": "TCP" 00:19:01.032 }, 00:19:01.032 "qid": 0, 00:19:01.032 "state": "enabled", 00:19:01.032 "thread": "nvmf_tgt_poll_group_000" 00:19:01.032 } 00:19:01.032 ]' 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.032 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.033 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.290 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:01.290 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:01.856 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.856 14:23:29 
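
Teardown between iterations is symmetric with setup, as the @78 and @83 frames show: the host-side bdev controller is detached and the host entry is removed from the subsystem before the next key/dhgroup combination is configured.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435
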
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:01.856 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.856 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.856 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.856 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.856 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.856 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.117 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:02.117 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.117 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.117 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.118 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.378 00:19:02.378 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.378 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.378 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.378 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.378 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.378 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.378 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.637 { 00:19:02.637 "auth": { 00:19:02.637 "dhgroup": "ffdhe2048", 00:19:02.637 "digest": "sha512", 00:19:02.637 "state": "completed" 00:19:02.637 }, 00:19:02.637 "cntlid": 111, 00:19:02.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:02.637 "listen_address": { 00:19:02.637 "adrfam": "IPv4", 00:19:02.637 "traddr": "10.0.0.3", 00:19:02.637 "trsvcid": "4420", 00:19:02.637 "trtype": "TCP" 00:19:02.637 }, 00:19:02.637 "peer_address": { 00:19:02.637 "adrfam": "IPv4", 00:19:02.637 "traddr": "10.0.0.1", 00:19:02.637 "trsvcid": "60874", 00:19:02.637 "trtype": "TCP" 00:19:02.637 }, 00:19:02.637 "qid": 0, 00:19:02.637 "state": "enabled", 00:19:02.637 "thread": "nvmf_tgt_poll_group_000" 00:19:02.637 } 00:19:02.637 ]' 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.637 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.895 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:02.895 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.460 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.718 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.975 00:19:03.975 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.975 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.975 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.234 { 00:19:04.234 "auth": { 00:19:04.234 "dhgroup": "ffdhe3072", 00:19:04.234 "digest": "sha512", 00:19:04.234 "state": "completed" 00:19:04.234 }, 00:19:04.234 "cntlid": 113, 00:19:04.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:04.234 "listen_address": { 00:19:04.234 "adrfam": "IPv4", 00:19:04.234 "traddr": "10.0.0.3", 00:19:04.234 "trsvcid": "4420", 00:19:04.234 "trtype": "TCP" 00:19:04.234 }, 00:19:04.234 "peer_address": { 00:19:04.234 "adrfam": "IPv4", 00:19:04.234 "traddr": "10.0.0.1", 00:19:04.234 "trsvcid": "60910", 00:19:04.234 "trtype": "TCP" 00:19:04.234 }, 00:19:04.234 "qid": 0, 00:19:04.234 "state": "enabled", 00:19:04.234 "thread": "nvmf_tgt_poll_group_000" 00:19:04.234 } 00:19:04.234 ]' 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.234 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.492 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:04.492 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret 
DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.100 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.358 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.616 00:19:05.616 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.616 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.616 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.874 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.874 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.874 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.874 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.874 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.874 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.874 { 00:19:05.874 "auth": { 00:19:05.874 "dhgroup": "ffdhe3072", 00:19:05.875 "digest": "sha512", 00:19:05.875 "state": "completed" 00:19:05.875 }, 00:19:05.875 "cntlid": 115, 00:19:05.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:05.875 "listen_address": { 00:19:05.875 "adrfam": "IPv4", 00:19:05.875 "traddr": "10.0.0.3", 00:19:05.875 "trsvcid": "4420", 00:19:05.875 "trtype": "TCP" 00:19:05.875 }, 00:19:05.875 "peer_address": { 00:19:05.875 "adrfam": "IPv4", 00:19:05.875 "traddr": "10.0.0.1", 00:19:05.875 "trsvcid": "60920", 00:19:05.875 "trtype": "TCP" 00:19:05.875 }, 00:19:05.875 "qid": 0, 00:19:05.875 "state": "enabled", 00:19:05.875 "thread": "nvmf_tgt_poll_group_000" 00:19:05.875 } 00:19:05.875 ]' 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.875 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.133 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:06.133 14:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 
9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:06.698 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.699 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:06.699 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.699 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.699 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.699 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.699 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.699 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.956 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.957 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.214 00:19:07.214 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.214 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.214 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.473 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.473 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.473 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.473 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.473 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.473 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.473 { 00:19:07.473 "auth": { 00:19:07.473 "dhgroup": "ffdhe3072", 00:19:07.473 "digest": "sha512", 00:19:07.473 "state": "completed" 00:19:07.473 }, 00:19:07.473 "cntlid": 117, 00:19:07.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:07.473 "listen_address": { 00:19:07.473 "adrfam": "IPv4", 00:19:07.473 "traddr": "10.0.0.3", 00:19:07.473 "trsvcid": "4420", 00:19:07.473 "trtype": "TCP" 00:19:07.473 }, 00:19:07.473 "peer_address": { 00:19:07.473 "adrfam": "IPv4", 00:19:07.473 "traddr": "10.0.0.1", 00:19:07.473 "trsvcid": "60962", 00:19:07.473 "trtype": "TCP" 00:19:07.473 }, 00:19:07.473 "qid": 0, 00:19:07.473 "state": "enabled", 00:19:07.473 "thread": "nvmf_tgt_poll_group_000" 00:19:07.473 } 00:19:07.473 ]' 00:19:07.473 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.473 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.473 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.473 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.473 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.473 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.473 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.473 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.740 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:07.740 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.305 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.563 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.821 00:19:08.821 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.821 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.821 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.080 { 00:19:09.080 "auth": { 00:19:09.080 "dhgroup": "ffdhe3072", 00:19:09.080 "digest": "sha512", 00:19:09.080 "state": "completed" 00:19:09.080 }, 00:19:09.080 "cntlid": 119, 00:19:09.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:09.080 "listen_address": { 00:19:09.080 "adrfam": "IPv4", 00:19:09.080 "traddr": "10.0.0.3", 00:19:09.080 "trsvcid": "4420", 00:19:09.080 "trtype": "TCP" 00:19:09.080 }, 00:19:09.080 "peer_address": { 00:19:09.080 "adrfam": "IPv4", 00:19:09.080 "traddr": "10.0.0.1", 00:19:09.080 "trsvcid": "45004", 00:19:09.080 "trtype": "TCP" 00:19:09.080 }, 00:19:09.080 "qid": 0, 00:19:09.080 "state": "enabled", 00:19:09.080 "thread": "nvmf_tgt_poll_group_000" 00:19:09.080 } 00:19:09.080 ]' 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.080 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.338 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:09.338 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.338 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.338 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.338 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.596 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:09.596 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.164 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
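
The iterations traced above all follow the same template: reconfigure the host's DH-HMAC-CHAP options for the digest/dhgroup pair under test, register the host NQN on the subsystem with that iteration's key pair, then attach a controller through the host RPC socket so in-band authentication actually runs. A condensed sketch of that sequence, assuming key names key0/ckey0 were registered with the host RPC server earlier in the script (not shown in this excerpt), with $hostnqn standing in for the literal nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-... value used throughout, and with a bare rpc.py against the target's default socket in place of the harness's rpc_cmd wrapper:

    # host side: request sha512 + ffdhe4096 for new attachments
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # target side: allow this host NQN with the key pair for this iteration
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, which performs DH-HMAC-CHAP in band
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

All three commands appear verbatim in the trace; only the $hostnqn shorthand and the target-socket assumption are introduced here.
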
00:19:10.454 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.712 00:19:10.712 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.712 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.712 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.970 { 00:19:10.970 "auth": { 00:19:10.970 "dhgroup": "ffdhe4096", 00:19:10.970 "digest": "sha512", 00:19:10.970 "state": "completed" 00:19:10.970 }, 00:19:10.970 "cntlid": 121, 00:19:10.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:10.970 "listen_address": { 00:19:10.970 "adrfam": "IPv4", 00:19:10.970 "traddr": "10.0.0.3", 00:19:10.970 "trsvcid": "4420", 00:19:10.970 "trtype": "TCP" 00:19:10.970 }, 00:19:10.970 "peer_address": { 00:19:10.970 "adrfam": "IPv4", 00:19:10.970 "traddr": "10.0.0.1", 00:19:10.970 "trsvcid": "45020", 00:19:10.970 "trtype": "TCP" 00:19:10.970 }, 00:19:10.970 "qid": 0, 00:19:10.970 "state": "enabled", 00:19:10.970 "thread": "nvmf_tgt_poll_group_000" 00:19:10.970 } 00:19:10.970 ]' 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.970 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.228 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret
DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:11.228 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:11.796 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.054 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:12.054 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.054 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.054 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:12.054 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.054 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.055 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.055 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.055 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.055 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.055 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.055 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.055 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.311 00:19:12.311 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.311 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.311 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.569 { 00:19:12.569 "auth": { 00:19:12.569 "dhgroup": "ffdhe4096", 00:19:12.569 "digest": "sha512", 00:19:12.569 "state": "completed" 00:19:12.569 }, 00:19:12.569 "cntlid": 123, 00:19:12.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:12.569 "listen_address": { 00:19:12.569 "adrfam": "IPv4", 00:19:12.569 "traddr": "10.0.0.3", 00:19:12.569 "trsvcid": "4420", 00:19:12.569 "trtype": "TCP" 00:19:12.569 }, 00:19:12.569 "peer_address": { 00:19:12.569 "adrfam": "IPv4", 00:19:12.569 "traddr": "10.0.0.1", 00:19:12.569 "trsvcid": "45048", 00:19:12.569 "trtype": "TCP" 00:19:12.569 }, 00:19:12.569 "qid": 0, 00:19:12.569 "state": "enabled", 00:19:12.569 "thread": "nvmf_tgt_poll_group_000" 00:19:12.569 } 00:19:12.569 ]' 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.569 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
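
Once the RPC attach/verify/detach round-trip succeeds, the same key pair is exercised through the kernel initiator: nvme-cli connects with the DHHC-1 secrets passed on the command line, then disconnects, and the host entry is removed before the next iteration, exactly as the entries that follow show. A trimmed sketch of that step, with $hostnqn and $hostid standing for the literal uuid-based values in the log and the secrets deliberately abbreviated rather than reproduced:

    # kernel initiator path: in-band DH-HMAC-CHAP via nvme-cli
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # target side: drop the host entry before the next iteration
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

rpc_cmd here is the harness wrapper from common/autotest_common.sh seen throughout the trace; the nvme-cli flags mirror the logged commands.
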
00:19:12.826 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:12.827 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.392 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.650 14:23:41
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.650 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.908 00:19:13.908 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.908 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.908 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.166 { 00:19:14.166 "auth": { 00:19:14.166 "dhgroup": "ffdhe4096", 00:19:14.166 "digest": "sha512", 00:19:14.166 "state": "completed" 00:19:14.166 }, 00:19:14.166 "cntlid": 125, 00:19:14.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:14.166 "listen_address": { 00:19:14.166 "adrfam": "IPv4", 00:19:14.166 "traddr": "10.0.0.3", 00:19:14.166 "trsvcid": "4420", 00:19:14.166 "trtype": "TCP" 00:19:14.166 }, 00:19:14.166 "peer_address": { 00:19:14.166 "adrfam": "IPv4", 00:19:14.166 "traddr": "10.0.0.1", 00:19:14.166 "trsvcid": "45076", 00:19:14.166 "trtype": "TCP" 00:19:14.166 }, 00:19:14.166 "qid": 0, 00:19:14.166 "state": "enabled", 00:19:14.166 "thread": "nvmf_tgt_poll_group_000" 00:19:14.166 } 00:19:14.166 ]' 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.166 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.424 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:14.424 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:14.990 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.991 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:14.991 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.991 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.991 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.991 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.991 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.991 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.249 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.507 00:19:15.507 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.507 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.507 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.766 { 00:19:15.766 "auth": { 00:19:15.766 "dhgroup": "ffdhe4096", 00:19:15.766 "digest": "sha512", 00:19:15.766 "state": "completed" 00:19:15.766 }, 00:19:15.766 "cntlid": 127, 00:19:15.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:15.766 "listen_address": { 00:19:15.766 "adrfam": "IPv4", 00:19:15.766 "traddr": "10.0.0.3", 00:19:15.766 "trsvcid": "4420", 00:19:15.766 "trtype": "TCP" 00:19:15.766 }, 00:19:15.766 "peer_address": { 00:19:15.766 "adrfam": "IPv4", 00:19:15.766 "traddr": "10.0.0.1", 00:19:15.766 "trsvcid": "45106", 00:19:15.766 "trtype": "TCP" 00:19:15.766 }, 00:19:15.766 "qid": 0, 00:19:15.766 "state": "enabled", 00:19:15.766 "thread": "nvmf_tgt_poll_group_000" 00:19:15.766 } 00:19:15.766 ]' 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.766 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.024 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.024 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
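
On the target side each iteration is verified by dumping the subsystem's queue pairs and asserting the negotiated authentication parameters, which is what the get_qpairs/jq entries above are doing. A minimal sketch of those checks for the sha512/ffdhe4096 case, assuming a running target reachable through the default RPC socket that rpc_cmd wraps:

    # confirm the qpair authenticated with the expected parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

The field names and expected values are taken directly from the qpair JSON dumps in this trace; only the here-string plumbing differs from the script's own helpers.
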
00:19:16.024 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.024 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:16.024 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:16.590 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.590 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:16.590 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.590 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.590 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.590 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.591 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.591 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.591 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.848 14:23:44
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.848 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.413 00:19:17.413 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.413 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.413 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.413 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.413 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.413 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.413 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.413 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.413 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.413 { 00:19:17.413 "auth": { 00:19:17.413 "dhgroup": "ffdhe6144", 00:19:17.413 "digest": "sha512", 00:19:17.413 "state": "completed" 00:19:17.413 }, 00:19:17.413 "cntlid": 129, 00:19:17.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:17.413 "listen_address": { 00:19:17.413 "adrfam": "IPv4", 00:19:17.413 "traddr": "10.0.0.3", 00:19:17.413 "trsvcid": "4420", 00:19:17.413 "trtype": "TCP" 00:19:17.413 }, 00:19:17.413 "peer_address": { 00:19:17.413 "adrfam": "IPv4", 00:19:17.413 "traddr": "10.0.0.1", 00:19:17.413 "trsvcid": "45134", 00:19:17.413 "trtype": "TCP" 00:19:17.413 }, 00:19:17.413 "qid": 0, 00:19:17.413 "state": "enabled", 00:19:17.413 "thread": "nvmf_tgt_poll_group_000" 00:19:17.413 } 00:19:17.413 ]' 00:19:17.413 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:17.672 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.238 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.496 14:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.496 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.062 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.062 { 00:19:19.062 "auth": { 00:19:19.062 "dhgroup": "ffdhe6144", 00:19:19.062 "digest": "sha512", 00:19:19.062 "state": "completed" 00:19:19.062 }, 00:19:19.062 "cntlid": 131, 00:19:19.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:19.062 "listen_address": { 00:19:19.062 "adrfam": "IPv4", 00:19:19.062 "traddr": "10.0.0.3", 00:19:19.062 "trsvcid": "4420", 00:19:19.062 "trtype": "TCP" 00:19:19.062 }, 00:19:19.062 "peer_address": { 00:19:19.062 "adrfam": "IPv4", 00:19:19.062 "traddr": "10.0.0.1", 00:19:19.062 "trsvcid": "46706", 00:19:19.062 "trtype": "TCP" 00:19:19.062 }, 00:19:19.062 "qid": 0, 00:19:19.062 "state": "enabled", 00:19:19.062 "thread": "nvmf_tgt_poll_group_000" 00:19:19.062 } 00:19:19.062 ]' 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.062 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.321 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.321 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:19:19.321 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.321 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.321 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.578 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:19.579 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.145 14:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.145 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.712 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.712 { 00:19:20.712 "auth": { 00:19:20.712 "dhgroup": "ffdhe6144", 00:19:20.712 "digest": "sha512", 00:19:20.712 "state": "completed" 00:19:20.712 }, 00:19:20.712 "cntlid": 133, 00:19:20.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:20.712 "listen_address": { 00:19:20.712 "adrfam": "IPv4", 00:19:20.712 "traddr": "10.0.0.3", 00:19:20.712 "trsvcid": "4420", 00:19:20.712 "trtype": "TCP" 00:19:20.712 }, 00:19:20.712 "peer_address": { 00:19:20.712 "adrfam": "IPv4", 00:19:20.712 "traddr": "10.0.0.1", 00:19:20.712 "trsvcid": "46748", 00:19:20.712 "trtype": "TCP" 00:19:20.712 }, 00:19:20.712 "qid": 0, 00:19:20.712 "state": "enabled", 00:19:20.712 "thread": "nvmf_tgt_poll_group_000" 00:19:20.712 } 00:19:20.712 ]' 00:19:20.712 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.970 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.971 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.971 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:19:20.971 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.971 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.971 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.971 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.228 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:21.228 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.795 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.054 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.310 00:19:22.310 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.310 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.310 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.570 { 00:19:22.570 "auth": { 00:19:22.570 "dhgroup": "ffdhe6144", 00:19:22.570 "digest": "sha512", 00:19:22.570 "state": "completed" 00:19:22.570 }, 00:19:22.570 "cntlid": 135, 00:19:22.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:22.570 "listen_address": { 00:19:22.570 "adrfam": "IPv4", 00:19:22.570 "traddr": "10.0.0.3", 00:19:22.570 "trsvcid": "4420", 00:19:22.570 "trtype": "TCP" 00:19:22.570 }, 00:19:22.570 "peer_address": { 00:19:22.570 "adrfam": "IPv4", 00:19:22.570 "traddr": "10.0.0.1", 00:19:22.570 "trsvcid": "46768", 00:19:22.570 "trtype": "TCP" 00:19:22.570 }, 00:19:22.570 "qid": 0, 00:19:22.570 "state": "enabled", 00:19:22.570 "thread": "nvmf_tgt_poll_group_000" 00:19:22.570 } 00:19:22.570 ]' 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.570 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.841 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:22.841 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.408 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.666 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.232 00:19:24.232 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.232 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.232 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.490 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.490 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.490 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.490 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.490 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.490 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.490 { 00:19:24.490 "auth": { 00:19:24.490 "dhgroup": "ffdhe8192", 00:19:24.490 "digest": "sha512", 00:19:24.490 "state": "completed" 00:19:24.490 }, 00:19:24.490 "cntlid": 137, 00:19:24.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:24.490 "listen_address": { 00:19:24.490 "adrfam": "IPv4", 00:19:24.490 "traddr": "10.0.0.3", 00:19:24.490 "trsvcid": "4420", 00:19:24.490 "trtype": "TCP" 00:19:24.491 }, 00:19:24.491 "peer_address": { 00:19:24.491 "adrfam": "IPv4", 00:19:24.491 "traddr": "10.0.0.1", 00:19:24.491 "trsvcid": "46776", 00:19:24.491 "trtype": "TCP" 00:19:24.491 }, 00:19:24.491 "qid": 0, 00:19:24.491 "state": "enabled", 00:19:24.491 "thread": "nvmf_tgt_poll_group_000" 00:19:24.491 } 00:19:24.491 ]' 00:19:24.491 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.491 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.491 14:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.491 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.491 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.491 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.491 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.491 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.749 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:24.749 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.331 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.589 14:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.589 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.155 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.155 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.155 { 00:19:26.155 "auth": { 00:19:26.155 "dhgroup": "ffdhe8192", 00:19:26.155 "digest": "sha512", 00:19:26.155 "state": "completed" 00:19:26.155 }, 00:19:26.155 "cntlid": 139, 00:19:26.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:26.155 "listen_address": { 00:19:26.155 "adrfam": "IPv4", 00:19:26.155 "traddr": "10.0.0.3", 00:19:26.155 "trsvcid": "4420", 00:19:26.155 "trtype": "TCP" 00:19:26.155 }, 00:19:26.155 "peer_address": { 00:19:26.155 "adrfam": "IPv4", 00:19:26.155 "traddr": "10.0.0.1", 00:19:26.155 "trsvcid": "46794", 00:19:26.155 "trtype": "TCP" 00:19:26.155 }, 00:19:26.155 "qid": 0, 00:19:26.155 "state": "enabled", 00:19:26.155 "thread": "nvmf_tgt_poll_group_000" 00:19:26.155 } 00:19:26.155 ]' 00:19:26.155 14:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.412 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.412 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.412 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.412 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.412 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.412 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.412 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.670 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:26.670 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: --dhchap-ctrl-secret DHHC-1:02:ODFmNzk4OWZjOTY2OGZhZDUxMjE0NmY1MTBmOTc3ZjU0ZGRmNjFiYmM1YWMyMDQwWOc6bA==: 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.235 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.493 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.059 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.059 { 00:19:28.059 "auth": { 00:19:28.059 "dhgroup": "ffdhe8192", 00:19:28.059 "digest": "sha512", 00:19:28.059 "state": "completed" 00:19:28.059 }, 00:19:28.059 "cntlid": 141, 00:19:28.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:28.059 "listen_address": { 00:19:28.059 "adrfam": "IPv4", 00:19:28.059 "traddr": "10.0.0.3", 00:19:28.059 "trsvcid": "4420", 00:19:28.059 "trtype": "TCP" 00:19:28.059 }, 00:19:28.059 "peer_address": { 00:19:28.059 "adrfam": "IPv4", 00:19:28.059 "traddr": "10.0.0.1", 00:19:28.059 "trsvcid": "49582", 00:19:28.059 "trtype": "TCP" 00:19:28.059 }, 00:19:28.059 "qid": 0, 00:19:28.059 "state": 
"enabled", 00:19:28.059 "thread": "nvmf_tgt_poll_group_000" 00:19:28.059 } 00:19:28.059 ]' 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.059 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.317 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.317 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.317 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.317 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.317 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.574 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:28.575 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:01:ZmZhMjcwOTUwNzNhMWI1MDkxNGRiMmQ3NTQ1ZTNhYTH5sdOn: 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.141 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.706 00:19:29.706 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.706 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.706 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.964 { 00:19:29.964 "auth": { 00:19:29.964 "dhgroup": "ffdhe8192", 00:19:29.964 "digest": "sha512", 00:19:29.964 "state": "completed" 00:19:29.964 }, 00:19:29.964 "cntlid": 143, 00:19:29.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:29.964 "listen_address": { 00:19:29.964 "adrfam": "IPv4", 00:19:29.964 "traddr": "10.0.0.3", 00:19:29.964 "trsvcid": "4420", 00:19:29.964 "trtype": "TCP" 00:19:29.964 }, 00:19:29.964 "peer_address": { 00:19:29.964 "adrfam": "IPv4", 00:19:29.964 "traddr": "10.0.0.1", 00:19:29.964 "trsvcid": "49608", 00:19:29.964 "trtype": "TCP" 00:19:29.964 }, 00:19:29.964 "qid": 0, 00:19:29.964 
"state": "enabled", 00:19:29.964 "thread": "nvmf_tgt_poll_group_000" 00:19:29.964 } 00:19:29.964 ]' 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.964 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.222 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:30.222 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:30.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:30.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.046 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.612 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.871 { 00:19:31.871 "auth": { 00:19:31.871 "dhgroup": "ffdhe8192", 00:19:31.871 "digest": "sha512", 00:19:31.871 "state": "completed" 00:19:31.871 }, 00:19:31.871 
"cntlid": 145, 00:19:31.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:31.871 "listen_address": { 00:19:31.871 "adrfam": "IPv4", 00:19:31.871 "traddr": "10.0.0.3", 00:19:31.871 "trsvcid": "4420", 00:19:31.871 "trtype": "TCP" 00:19:31.871 }, 00:19:31.871 "peer_address": { 00:19:31.871 "adrfam": "IPv4", 00:19:31.871 "traddr": "10.0.0.1", 00:19:31.871 "trsvcid": "49638", 00:19:31.871 "trtype": "TCP" 00:19:31.871 }, 00:19:31.871 "qid": 0, 00:19:31.871 "state": "enabled", 00:19:31.871 "thread": "nvmf_tgt_poll_group_000" 00:19:31.871 } 00:19:31.871 ]' 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.871 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.129 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.129 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.129 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.129 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.129 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.129 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:32.129 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:00:ZDA3YTNlNWY1MTIxNDU1ZjUyNDViNjM3M2FjMmI4OTg4Nzc2MDI0NDliY2U1MGJisiJ9Hw==: --dhchap-ctrl-secret DHHC-1:03:ZTAwZGFiOTE0OWMxODdkYTlmMTIxNjBhOGQzYWQ2ZGIxMjE3NjI0ODM0OTY4OGUxMjljYWU1MTVkNjlmNDQ4N2pyDWw=: 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 00:19:33.061 14:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:33.061 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:33.062 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.062 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:33.062 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.062 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:33.062 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:33.062 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:33.320 2024/11/06 14:24:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:33.320 request: 00:19:33.320 { 00:19:33.320 "method": "bdev_nvme_attach_controller", 00:19:33.320 "params": { 00:19:33.320 "name": "nvme0", 00:19:33.320 "trtype": "tcp", 00:19:33.320 "traddr": "10.0.0.3", 00:19:33.320 "adrfam": "ipv4", 00:19:33.320 "trsvcid": "4420", 00:19:33.320 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:33.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:33.320 "prchk_reftag": false, 00:19:33.320 "prchk_guard": false, 00:19:33.320 "hdgst": false, 00:19:33.320 "ddgst": false, 00:19:33.320 "dhchap_key": "key2", 00:19:33.321 "allow_unrecognized_csi": false 00:19:33.321 } 00:19:33.321 } 00:19:33.321 Got JSON-RPC error response 00:19:33.321 GoRPCClient: error on JSON-RPC call 00:19:33.321 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:33.321 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 
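[Editor's note] The entries above show the suite's expected-failure path: the target was configured with key1 only (target/auth.sh@144), so the host's attach attempt with key2 is rejected and bdev_nvme_attach_controller comes back with Code=-5 (Input/output error); the NOT wrapper from autotest_common.sh then treats the non-zero exit as the passing outcome. A minimal standalone sketch of that pattern follows — the simplified NOT helper and the $rpc variable are assumptions here (the suite's real NOT/valid_exec_arg machinery is more involved); addresses, NQNs, and key names are taken from this log.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  NOT() {  # succeed only if the wrapped command fails (simplified stand-in)
    if "$@"; then
      return 1
    fi
    return 0
  }

  # Target knows key1 only, so attaching with key2 must fail (JSON-RPC Code=-5).
  NOT "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2

The same pattern repeats below for the mismatched controller key (key1/ckey2) and for a host entry added without a controller key (key1/ckey1).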
00:19:33.321 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.321 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.321 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:33.321 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.321 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.579 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.837 2024/11/06 14:24:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:33.837 request: 00:19:33.837 { 00:19:33.837 "method": "bdev_nvme_attach_controller", 00:19:33.837 "params": { 00:19:33.837 "name": "nvme0", 00:19:33.837 "trtype": "tcp", 00:19:33.837 "traddr": "10.0.0.3", 00:19:33.837 "adrfam": "ipv4", 00:19:33.837 "trsvcid": "4420", 00:19:33.837 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:33.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:33.837 "prchk_reftag": false, 00:19:33.837 "prchk_guard": false, 00:19:33.837 "hdgst": false, 00:19:33.837 "ddgst": false, 00:19:33.837 "dhchap_key": "key1", 00:19:33.837 "dhchap_ctrlr_key": "ckey2", 00:19:33.837 "allow_unrecognized_csi": false 00:19:33.837 } 00:19:33.837 } 00:19:33.837 Got JSON-RPC error response 00:19:33.837 GoRPCClient: error on JSON-RPC call 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # 
type -t bdev_connect 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.837 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.402 2024/11/06 14:24:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:34.402 request: 00:19:34.402 { 00:19:34.402 "method": "bdev_nvme_attach_controller", 00:19:34.402 "params": { 00:19:34.402 "name": "nvme0", 00:19:34.402 "trtype": "tcp", 00:19:34.402 "traddr": "10.0.0.3", 00:19:34.402 "adrfam": "ipv4", 00:19:34.402 "trsvcid": "4420", 00:19:34.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:34.402 "prchk_reftag": false, 00:19:34.402 "prchk_guard": false, 00:19:34.402 "hdgst": false, 00:19:34.402 "ddgst": false, 00:19:34.402 "dhchap_key": "key1", 00:19:34.402 "dhchap_ctrlr_key": "ckey1", 00:19:34.402 "allow_unrecognized_csi": false 00:19:34.402 } 00:19:34.402 } 00:19:34.402 Got JSON-RPC error response 00:19:34.402 GoRPCClient: error on JSON-RPC call 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 82215 00:19:34.402 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 82215 ']' 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 82215 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82215 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:34.402 killing process with pid 82215 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82215' 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 82215 00:19:34.402 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 82215 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=86795 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 86795 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 86795 ']' 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.967 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 86795 00:19:35.901 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 86795 ']' 00:19:35.902 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.902 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.902 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
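For reference, the restart above launches a fresh nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then blocks until its JSON-RPC socket answers. A minimal sketch of that wait loop, assuming rpc.py is run from the SPDK repo root and the target uses the default /var/tmp/spdk.sock socket (the loop bounds and sleep interval here are illustrative, not the harness's exact values):

  # Start the target with auth debug logging and capture its pid.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Poll until the app answers an RPC; rpc_get_methods is always registered.
  for ((i = 0; i < 100; i++)); do
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      kill -0 "$nvmfpid" || exit 1   # bail out early if the target died
      sleep 0.1
  done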
00:19:35.902 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.902 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.159 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.159 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:36.159 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:36.159 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.159 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 null0 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ben 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ggZ ]] 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ggZ 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.417 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Bc8 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.eRo ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eRo 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:36.417 14:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KdJ 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.2DT ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2DT 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GhX 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.417 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.418 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:19:36.418 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.418 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.418 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.418 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.418 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
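The connect_authenticate flow running above hinges on target and host referencing the same keyring entry name. In outline, using the key files and NQNs from this log ($hostnqn stands in for the long uuid-based host NQN; the DHHC-1 secrets live in the key files and are not repeated here):

  # Target side: register the secret and allow this host to use it.
  ./scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.GhX
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3

  # Host side: the same key name must exist in the host app's keyring
  # (the app behind /var/tmp/host.sock) before the authenticated attach succeeds.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key3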
00:19:36.675 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.609 nvme0n1 00:19:37.609 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.609 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.609 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.609 { 00:19:37.609 "auth": { 00:19:37.609 "dhgroup": "ffdhe8192", 00:19:37.609 "digest": "sha512", 00:19:37.609 "state": "completed" 00:19:37.609 }, 00:19:37.609 "cntlid": 1, 00:19:37.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:37.609 "listen_address": { 00:19:37.609 "adrfam": "IPv4", 00:19:37.609 "traddr": "10.0.0.3", 00:19:37.609 "trsvcid": "4420", 00:19:37.609 "trtype": "TCP" 00:19:37.609 }, 00:19:37.609 "peer_address": { 00:19:37.609 "adrfam": "IPv4", 00:19:37.609 "traddr": "10.0.0.1", 00:19:37.609 "trsvcid": "49690", 00:19:37.609 "trtype": "TCP" 00:19:37.609 }, 00:19:37.609 "qid": 0, 00:19:37.609 "state": "enabled", 00:19:37.609 "thread": "nvmf_tgt_poll_group_000" 00:19:37.609 } 00:19:37.609 ]' 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.609 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.868 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.868 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.868 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.868 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:37.868 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key3 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.803 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.061 2024/11/06 14:24:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:39.061 request: 00:19:39.061 { 00:19:39.061 "method": "bdev_nvme_attach_controller", 00:19:39.061 "params": { 00:19:39.061 "name": "nvme0", 00:19:39.061 "trtype": "tcp", 00:19:39.061 "traddr": "10.0.0.3", 00:19:39.061 "adrfam": "ipv4", 00:19:39.061 "trsvcid": "4420", 00:19:39.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:39.061 "prchk_reftag": false, 00:19:39.061 "prchk_guard": false, 00:19:39.061 "hdgst": false, 00:19:39.061 "ddgst": false, 00:19:39.061 "dhchap_key": "key3", 00:19:39.061 "allow_unrecognized_csi": false 00:19:39.061 } 00:19:39.061 } 00:19:39.061 Got JSON-RPC error response 00:19:39.061 GoRPCClient: error on JSON-RPC call 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:39.061 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.320 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.578 2024/11/06 14:24:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:39.578 request: 00:19:39.578 { 00:19:39.578 "method": "bdev_nvme_attach_controller", 00:19:39.578 "params": { 00:19:39.578 "name": "nvme0", 00:19:39.578 "trtype": "tcp", 00:19:39.578 "traddr": "10.0.0.3", 00:19:39.578 "adrfam": "ipv4", 00:19:39.578 "trsvcid": "4420", 00:19:39.578 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:39.578 "prchk_reftag": false, 00:19:39.578 "prchk_guard": false, 00:19:39.578 "hdgst": false, 00:19:39.578 "ddgst": false, 00:19:39.578 "dhchap_key": "key3", 00:19:39.578 "allow_unrecognized_csi": false 00:19:39.578 } 00:19:39.578 } 00:19:39.578 Got JSON-RPC error response 00:19:39.578 GoRPCClient: error on JSON-RPC call 00:19:39.578 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:39.578 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.579 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.872 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:39.872 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.873 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:40.131 2024/11/06 14:24:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:40.131 request: 00:19:40.131 { 00:19:40.131 "method": "bdev_nvme_attach_controller", 00:19:40.131 "params": { 00:19:40.131 "name": "nvme0", 00:19:40.131 "trtype": "tcp", 00:19:40.131 "traddr": "10.0.0.3", 00:19:40.131 "adrfam": "ipv4", 00:19:40.131 "trsvcid": "4420", 00:19:40.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:40.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:40.131 "prchk_reftag": false, 00:19:40.131 "prchk_guard": false, 00:19:40.131 "hdgst": false, 00:19:40.131 "ddgst": false, 00:19:40.131 "dhchap_key": "key0", 00:19:40.131 "dhchap_ctrlr_key": "key1", 00:19:40.131 "allow_unrecognized_csi": false 00:19:40.131 } 00:19:40.131 } 00:19:40.131 Got JSON-RPC error response 00:19:40.131 GoRPCClient: error on JSON-RPC call 00:19:40.131 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:40.131 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:40.131 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:40.131 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:40.131 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:40.131 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:40.131 14:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:40.389 nvme0n1 00:19:40.389 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:40.389 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:40.389 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.647 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.647 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.647 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.905 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 00:19:40.905 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.905 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:40.905 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.905 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:40.905 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:40.905 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:41.838 nvme0n1 00:19:41.838 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:41.838 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:41.838 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:42.095 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.353 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.353 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:42.353 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid 9ca3beaa-4765-4780-96a0-c04fd97e7435 -l 0 --dhchap-secret DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: --dhchap-ctrl-secret DHHC-1:03:ZmM3YWI1MDhlMGY2ZjYwNDExYjcyNGIzMjYzNjkxZmIxMjY5YmE3NGViYzYwZjQyNjVkNGZkNjA3NTk4NWY5MVH/20U=: 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
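The nvme_get_ctrlr step that follows resolves the kernel controller name by scanning nvme-fabrics sysfs for the matching subsystem NQN, roughly like this (a sketch of the comparison the trace shows; subsysnqn is the sysfs attribute being matched):

  # Find the fabrics controller whose subsystem NQN matches cnode0.
  for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
      if [[ "$(cat "$dev/subsysnqn")" == nqn.2024-03.io.spdk:cnode0 ]]; then
          basename "$dev"   # e.g. nvme0
          break
      fi
  done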
00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.919 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.176 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:43.177 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:43.741 2024/11/06 14:24:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:43.741 request: 00:19:43.741 { 00:19:43.741 "method": "bdev_nvme_attach_controller", 00:19:43.741 "params": { 00:19:43.741 "name": "nvme0", 00:19:43.741 "trtype": "tcp", 00:19:43.741 "traddr": "10.0.0.3", 00:19:43.741 "adrfam": "ipv4", 
00:19:43.741 "trsvcid": "4420", 00:19:43.741 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:43.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435", 00:19:43.741 "prchk_reftag": false, 00:19:43.741 "prchk_guard": false, 00:19:43.741 "hdgst": false, 00:19:43.741 "ddgst": false, 00:19:43.741 "dhchap_key": "key1", 00:19:43.741 "allow_unrecognized_csi": false 00:19:43.741 } 00:19:43.741 } 00:19:43.741 Got JSON-RPC error response 00:19:43.741 GoRPCClient: error on JSON-RPC call 00:19:43.741 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:43.741 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.741 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.741 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.741 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:43.741 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:43.742 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:44.676 nvme0n1 00:19:44.676 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:44.676 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:44.676 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.676 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.676 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.676 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.934 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:44.934 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.934 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.934 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.934 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:44.934 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:44.934 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:45.193 nvme0n1 00:19:45.193 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:45.193 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.193 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:45.452 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.452 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.452 14:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: '' 2s 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: ]] 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTgyZDkxODc4ZTc5ZDU5MjY5MmViMWIyZmZmNGJhNjd47jto: 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:45.710 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:47.609 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:19:47.610 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:19:47.610 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:47.610 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:47.610 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:47.610 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: 2s 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: 00:19:47.867 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:47.868 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:47.868 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:47.868 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: ]] 00:19:47.868 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODFiN2I5NWNmYjgwZDNmYzliYzJkYTU4N2ZhN2I3MzE5NzhlYjUxMjIxZTQwNmY5EVCk5g==: 00:19:47.868 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:47.868 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:49.787 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:50.720 nvme0n1 00:19:50.720 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:50.720 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.720 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.720 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.720 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:50.720 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:51.287 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:51.546 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:51.546 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.546 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:51.804 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.804 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:51.804 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:51.805 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:19:52.373 2024/11/06 14:24:19 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:19:52.373 request: 00:19:52.373 { 00:19:52.373 "method": "bdev_nvme_set_keys", 00:19:52.373 "params": { 00:19:52.373 "name": "nvme0", 00:19:52.373 "dhchap_key": "key1", 00:19:52.373 "dhchap_ctrlr_key": "key3" 00:19:52.373 } 00:19:52.373 } 00:19:52.373 Got JSON-RPC error response 00:19:52.373 GoRPCClient: error on JSON-RPC call 00:19:52.373 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:52.373 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:52.373 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:52.373 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:52.373 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:52.373 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.373 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:52.630 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:52.630 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:53.562 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:53.562 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.562 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:53.821 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:54.753 nvme0n1 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:54.753 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:55.013 2024/11/06 14:24:22 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:19:55.013 request: 00:19:55.013 { 00:19:55.013 "method": "bdev_nvme_set_keys", 00:19:55.013 "params": { 00:19:55.013 "name": "nvme0", 00:19:55.013 "dhchap_key": "key2", 00:19:55.013 "dhchap_ctrlr_key": "key0" 00:19:55.013 } 00:19:55.013 } 00:19:55.013 Got JSON-RPC error response 00:19:55.013 GoRPCClient: error on JSON-RPC call 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
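The trace above is the DH-HMAC-CHAP re-key sequence: the target is re-keyed with nvmf_subsystem_set_keys, the host follows with bdev_nvme_set_keys on the live controller, and a deliberately stale key pair must be rejected with Code=-13 (Permission denied). A minimal sketch of that flow in isolation, assuming the target answers on rpc.py's default socket, the host app on /var/tmp/host.sock, and keyring entries key0..key3 loaded earlier in the test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435

# Target side: restrict this host to the key2/key3 pair.
"$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: re-authenticate the existing controller with the matching pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# A mismatched pair has to fail; the RPC surfaces Code=-13 Permission denied.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "stale key unexpectedly accepted" >&2
    exit 1
fi

After the rejected attempt, the test polls bdev_nvme_get_controllers until the 1-second --ctrlr-loss-timeout-sec drops the controller, which is the jq length / sleep 1s loop visible next.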
00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:55.272 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:56.645 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:56.645 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.645 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:56.645 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:56.645 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:56.645 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:56.645 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 82259 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 82259 ']' 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 82259 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82259 00:19:56.646 killing process with pid 82259 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82259' 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 82259 00:19:56.646 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 82259 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:58.018 rmmod nvme_tcp 00:19:58.018 rmmod nvme_fabrics 00:19:58.018 rmmod nvme_keyring 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 86795 ']' 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 86795 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 86795 ']' 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 86795 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86795 00:19:58.018 killing process with pid 86795 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86795' 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 86795 00:19:58.018 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 86795 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:58.585 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:58.585 14:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ben /tmp/spdk.key-sha256.Bc8 /tmp/spdk.key-sha384.KdJ /tmp/spdk.key-sha512.GhX /tmp/spdk.key-sha512.ggZ /tmp/spdk.key-sha384.eRo /tmp/spdk.key-sha256.2DT '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:19:58.585 00:19:58.585 real 2m36.394s 00:19:58.585 user 6m11.235s 00:19:58.585 sys 0m16.222s 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:58.585 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.585 ************************************ 00:19:58.585 END TEST nvmf_auth_target 00:19:58.586 ************************************ 00:19:58.586 14:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:58.586 14:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:58.586 14:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:58.586 14:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:58.586 14:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:58.586 ************************************ 00:19:58.586 START TEST nvmf_bdevio_no_huge 00:19:58.586 ************************************ 00:19:58.586 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:58.845 * Looking for test storage... 
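Before the bdevio suite's output continues, note the shape of the nvmftestfini teardown that just ran: unload the host NVMe/TCP modules, detach and delete the veth/bridge topology, then let _remove_spdk_ns drop the target's network namespace (its chatter is redirected away through fd 15 above). Condensed into a standalone sketch, with the interface and namespace names nvmf_veth_init created earlier in the run:

# Unload host-side modules; the caller wraps this in a set +e retry loop
# to ride out transient "module in use" failures.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Pull each veth leg off the bridge, then bring it down.
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster
    ip link set "$ifc" down
done

# Delete the bridge and the host-side veth ends, then the target-side ends
# inside the namespace; _remove_spdk_ns finally removes the netns itself.
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2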
00:19:58.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.845 --rc genhtml_branch_coverage=1 00:19:58.845 --rc genhtml_function_coverage=1 00:19:58.845 --rc genhtml_legend=1 00:19:58.845 --rc geninfo_all_blocks=1 00:19:58.845 --rc geninfo_unexecuted_blocks=1 00:19:58.845 00:19:58.845 ' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.845 --rc genhtml_branch_coverage=1 00:19:58.845 --rc genhtml_function_coverage=1 00:19:58.845 --rc genhtml_legend=1 00:19:58.845 --rc geninfo_all_blocks=1 00:19:58.845 --rc geninfo_unexecuted_blocks=1 00:19:58.845 00:19:58.845 ' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.845 --rc genhtml_branch_coverage=1 00:19:58.845 --rc genhtml_function_coverage=1 00:19:58.845 --rc genhtml_legend=1 00:19:58.845 --rc geninfo_all_blocks=1 00:19:58.845 --rc geninfo_unexecuted_blocks=1 00:19:58.845 00:19:58.845 ' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.845 --rc genhtml_branch_coverage=1 00:19:58.845 --rc genhtml_function_coverage=1 00:19:58.845 --rc genhtml_legend=1 00:19:58.845 --rc geninfo_all_blocks=1 00:19:58.845 --rc geninfo_unexecuted_blocks=1 00:19:58.845 00:19:58.845 ' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.845 
14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:58.845 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.845 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.846 
14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:58.846 Cannot find device "nvmf_init_br" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:58.846 Cannot find device "nvmf_init_br2" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:58.846 Cannot find device "nvmf_tgt_br" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.846 Cannot find device "nvmf_tgt_br2" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:58.846 Cannot find device "nvmf_init_br" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:58.846 Cannot find device "nvmf_init_br2" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:58.846 Cannot find device "nvmf_tgt_br" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:58.846 Cannot find device "nvmf_tgt_br2" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:58.846 Cannot find device "nvmf_br" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:58.846 Cannot find device "nvmf_init_if" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:58.846 Cannot find device "nvmf_init_if2" 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:19:58.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.846 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:59.104 14:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:59.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:19:59.104 00:19:59.104 --- 10.0.0.3 ping statistics --- 00:19:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.104 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:59.104 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:59.104 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:19:59.104 00:19:59.104 --- 10.0.0.4 ping statistics --- 00:19:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.104 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:59.104 00:19:59.104 --- 10.0.0.1 ping statistics --- 00:19:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.104 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:59.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:59.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:19:59.104 00:19:59.104 --- 10.0.0.2 ping statistics --- 00:19:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.104 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=87647 00:19:59.104 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 87647 00:19:59.105 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 87647 ']' 00:19:59.105 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:59.105 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.105 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.105 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.105 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.105 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:59.105 [2024-11-06 14:24:26.731010] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:59.105 [2024-11-06 14:24:26.731123] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:59.362 [2024-11-06 14:24:26.909631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.621 [2024-11-06 14:24:27.014837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.621 [2024-11-06 14:24:27.014886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.621 [2024-11-06 14:24:27.014896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.621 [2024-11-06 14:24:27.014905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.621 [2024-11-06 14:24:27.014912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.621 [2024-11-06 14:24:27.015938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:59.621 [2024-11-06 14:24:27.016059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:59.621 [2024-11-06 14:24:27.016156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.621 [2024-11-06 14:24:27.016195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.189 [2024-11-06 14:24:27.605543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.189 Malloc0 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.189 [2024-11-06 14:24:27.679643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.189 { 00:20:00.189 "params": { 00:20:00.189 "name": "Nvme$subsystem", 00:20:00.189 "trtype": "$TEST_TRANSPORT", 00:20:00.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.189 "adrfam": "ipv4", 00:20:00.189 "trsvcid": "$NVMF_PORT", 00:20:00.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.189 "hdgst": ${hdgst:-false}, 00:20:00.189 "ddgst": ${ddgst:-false} 00:20:00.189 }, 00:20:00.189 "method": "bdev_nvme_attach_controller" 00:20:00.189 } 00:20:00.189 EOF 00:20:00.189 )") 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
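gen_nvmf_target_json builds the bdevio configuration from the heredoc being expanded here: one bdev_nvme_attach_controller entry per subsystem, with ${hdgst:-false}-style defaults, pretty-printed through jq and handed to the app as --json /dev/fd/62 (bash process substitution). A sketch of the per-subsystem fragment for this run, assuming the NVMF_* variables exported by test/nvmf/common.sh:

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.3
NVMF_PORT=4420
subsystem=1
hdgst= ddgst=    # left empty so the :-false defaults below apply

cat <<EOF | jq .
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF

The printf that follows in the trace shows the fully expanded result the app consumes.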
00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:00.189 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:00.189 "params": { 00:20:00.189 "name": "Nvme1", 00:20:00.189 "trtype": "tcp", 00:20:00.189 "traddr": "10.0.0.3", 00:20:00.189 "adrfam": "ipv4", 00:20:00.189 "trsvcid": "4420", 00:20:00.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.189 "hdgst": false, 00:20:00.189 "ddgst": false 00:20:00.189 }, 00:20:00.189 "method": "bdev_nvme_attach_controller" 00:20:00.189 }' 00:20:00.189 [2024-11-06 14:24:27.749958] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:20:00.189 [2024-11-06 14:24:27.750377] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87701 ] 00:20:00.448 [2024-11-06 14:24:27.929114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:00.448 [2024-11-06 14:24:28.038798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.448 [2024-11-06 14:24:28.038869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.448 [2024-11-06 14:24:28.038883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.015 I/O targets: 00:20:01.015 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:01.015 00:20:01.015 00:20:01.015 CUnit - A unit testing framework for C - Version 2.1-3 00:20:01.015 http://cunit.sourceforge.net/ 00:20:01.015 00:20:01.015 00:20:01.015 Suite: bdevio tests on: Nvme1n1 00:20:01.015 Test: blockdev write read block ...passed 00:20:01.015 Test: blockdev write zeroes read block ...passed 00:20:01.015 Test: blockdev write zeroes read no split ...passed 00:20:01.015 Test: blockdev write zeroes read split ...passed 00:20:01.015 Test: blockdev write zeroes read split partial ...passed 00:20:01.015 Test: blockdev reset ...[2024-11-06 14:24:28.526263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:01.015 [2024-11-06 14:24:28.526378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:20:01.015 [2024-11-06 14:24:28.546831] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
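The reset case above disconnects the controller on purpose, so the failed TCP flush (Bad file descriptor) is expected once the socket is gone, and bdev_nvme_reset_ctrlr_complete then reports the reconnect. Among the cases that follow, blockdev comparev and writev produces the wall of NOTICEs: it submits fused COMPARE+WRITE pairs with deliberately mismatching compare data, so the pass condition is SCT/SC 02/85 (Media Errors / Compare Failure) on each COMPARE and 00/09 (Generic / Command Aborted due to Failed Fused Command) on its fused WRITE. A small check that the two always come in pairs when scanning a captured console log (the capture path is hypothetical):

log=bdevio-console.log    # hypothetical capture of the output below
cmp_fail=$(grep -c 'COMPARE FAILURE (02/85)' "$log")
fused_abrt=$(grep -c 'FAILED FUSED (00/09)' "$log")
echo "compare failures: $cmp_fail, aborted fused writes: $fused_abrt"
(( cmp_fail == fused_abrt )) || echo "unpaired fused command" >&2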
00:20:01.015 passed 00:20:01.015 Test: blockdev write read 8 blocks ...passed 00:20:01.015 Test: blockdev write read size > 128k ...passed 00:20:01.015 Test: blockdev write read invalid size ...passed 00:20:01.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:01.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:01.015 Test: blockdev write read max offset ...passed 00:20:01.273 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:01.273 Test: blockdev writev readv 8 blocks ...passed 00:20:01.273 Test: blockdev writev readv 30 x 1block ...passed 00:20:01.273 Test: blockdev writev readv block ...passed 00:20:01.273 Test: blockdev writev readv size > 128k ...passed 00:20:01.273 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:01.273 Test: blockdev comparev and writev ...[2024-11-06 14:24:28.721192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.721243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.721260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.721269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.721603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.721623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.721636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.721644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.721914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.721933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.721946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.721961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.722278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.722300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.722313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.273 [2024-11-06 14:24:28.722320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:01.273 passed 00:20:01.273 Test: blockdev nvme passthru rw ...passed 00:20:01.273 Test: blockdev nvme passthru vendor specific ...[2024-11-06 14:24:28.805319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:01.273 [2024-11-06 14:24:28.805365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.805526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:01.273 [2024-11-06 14:24:28.805538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.805779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:01.273 [2024-11-06 14:24:28.805797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:01.273 [2024-11-06 14:24:28.805992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:01.273 [2024-11-06 14:24:28.806009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:01.273 passed 00:20:01.273 Test: blockdev nvme admin passthru ...passed 00:20:01.273 Test: blockdev copy ...passed 00:20:01.273 00:20:01.273 Run Summary: Type Total Ran Passed Failed Inactive 00:20:01.273 suites 1 1 n/a 0 0 00:20:01.273 tests 23 23 23 0 0 00:20:01.273 asserts 152 152 152 0 n/a 00:20:01.273 00:20:01.273 Elapsed time = 0.959 seconds 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.839 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.839 rmmod nvme_tcp 00:20:01.839 rmmod nvme_fabrics 00:20:02.099 rmmod nvme_keyring 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 87647 ']' 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 87647 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 87647 ']' 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 87647 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87647 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87647' 00:20:02.099 killing process with pid 87647 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 87647 00:20:02.099 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 87647 00:20:02.672 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:02.672 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:02.672 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:02.672 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:02.672 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:02.672 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:02.672 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:02.672 14:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:20:02.672 00:20:02.672 real 0m4.046s 00:20:02.672 user 0m14.826s 00:20:02.672 sys 0m1.217s 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.672 ************************************ 00:20:02.672 END TEST nvmf_bdevio_no_huge 00:20:02.672 ************************************ 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.672 ************************************ 00:20:02.672 START TEST nvmf_tls 00:20:02.672 ************************************ 00:20:02.672 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:02.934 * Looking for test storage... 
00:20:02.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.934 --rc genhtml_branch_coverage=1 00:20:02.934 --rc genhtml_function_coverage=1 00:20:02.934 --rc genhtml_legend=1 00:20:02.934 --rc geninfo_all_blocks=1 00:20:02.934 --rc geninfo_unexecuted_blocks=1 00:20:02.934 00:20:02.934 ' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.934 --rc genhtml_branch_coverage=1 00:20:02.934 --rc genhtml_function_coverage=1 00:20:02.934 --rc genhtml_legend=1 00:20:02.934 --rc geninfo_all_blocks=1 00:20:02.934 --rc geninfo_unexecuted_blocks=1 00:20:02.934 00:20:02.934 ' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.934 --rc genhtml_branch_coverage=1 00:20:02.934 --rc genhtml_function_coverage=1 00:20:02.934 --rc genhtml_legend=1 00:20:02.934 --rc geninfo_all_blocks=1 00:20:02.934 --rc geninfo_unexecuted_blocks=1 00:20:02.934 00:20:02.934 ' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.934 --rc genhtml_branch_coverage=1 00:20:02.934 --rc genhtml_function_coverage=1 00:20:02.934 --rc genhtml_legend=1 00:20:02.934 --rc geninfo_all_blocks=1 00:20:02.934 --rc geninfo_unexecuted_blocks=1 00:20:02.934 00:20:02.934 ' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.934 14:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:02.934 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:02.935 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:02.935 
14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:02.935 Cannot find device "nvmf_init_br" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:02.935 Cannot find device "nvmf_init_br2" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:02.935 Cannot find device "nvmf_tgt_br" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.935 Cannot find device "nvmf_tgt_br2" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:02.935 Cannot find device "nvmf_init_br" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:02.935 Cannot find device "nvmf_init_br2" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:02.935 Cannot find device "nvmf_tgt_br" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:02.935 Cannot find device "nvmf_tgt_br2" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:02.935 Cannot find device "nvmf_br" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:02.935 Cannot find device "nvmf_init_if" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:02.935 Cannot find device "nvmf_init_if2" 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.935 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:03.194 14:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:03.194 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:03.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:03.195 00:20:03.195 --- 10.0.0.3 ping statistics --- 00:20:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.195 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:03.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:03.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:20:03.195 00:20:03.195 --- 10.0.0.4 ping statistics --- 00:20:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.195 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:03.195 00:20:03.195 --- 10.0.0.1 ping statistics --- 00:20:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.195 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:03.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:03.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:20:03.195 00:20:03.195 --- 10.0.0.2 ping statistics --- 00:20:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.195 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=87948 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 87948 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 87948 ']' 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:03.195 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.195 [2024-11-06 14:24:30.822220] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
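The nvmf_veth_init trace above builds the virtual test network one command at a time; the following is a condensed sketch of the topology it appears to construct, consolidated from the trace for readability. Interface names and addresses are taken verbatim from the log, but the grouping and loop are illustrative, not a verbatim copy of nvmf/common.sh (the trace also installs iptables ACCEPT rules for port 4420, omitted here).

# Sketch (assumption: consolidated from the nvmf_veth_init trace above).
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; the SPDK target binds 10.0.0.3/.4.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge joins the four peer ends, giving host<->namespace L2 paths.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
    ip link set "$dev" up
done

The four pings that follow in the log are exactly the reachability check for this layout: host to both target addresses, and namespace back to both initiator addresses.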
00:20:03.195 [2024-11-06 14:24:30.822333] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.454 [2024-11-06 14:24:30.981816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.454 [2024-11-06 14:24:31.077527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.454 [2024-11-06 14:24:31.077577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.454 [2024-11-06 14:24:31.077589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.454 [2024-11-06 14:24:31.077605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.454 [2024-11-06 14:24:31.077614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.454 [2024-11-06 14:24:31.078460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:04.387 true 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:04.387 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:04.713 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:04.713 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:04.713 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:04.977 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:04.977 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:04.977 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:04.977 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:04.977 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:05.236 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:20:05.236 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:05.496 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:05.496 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:05.496 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:05.496 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:05.754 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:05.754 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:05.754 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:06.013 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:06.013 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:06.013 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:06.013 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:06.013 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:06.271 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:06.271 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9ULJ6QqRXZ 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:06.529 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.bEooU5wntO 00:20:06.530 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:06.530 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:06.530 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9ULJ6QqRXZ 00:20:06.530 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.bEooU5wntO 00:20:06.530 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:06.787 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:07.352 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9ULJ6QqRXZ 00:20:07.352 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9ULJ6QqRXZ 00:20:07.352 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.352 [2024-11-06 14:24:34.870250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.352 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.610 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:20:07.869 [2024-11-06 14:24:35.274353] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.869 [2024-11-06 14:24:35.274551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:07.869 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:07.869 malloc0 00:20:08.127 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.127 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9ULJ6QqRXZ 00:20:08.385 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.644 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9ULJ6QqRXZ 00:20:20.840 Initializing NVMe Controllers 00:20:20.840 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.840 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.840 Initialization complete. Launching workers. 00:20:20.840 ======================================================== 00:20:20.840 Latency(us) 00:20:20.840 Device Information : IOPS MiB/s Average min max 00:20:20.840 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13581.19 53.05 4712.99 1609.92 15060.68 00:20:20.840 ======================================================== 00:20:20.840 Total : 13581.19 53.05 4712.99 1609.92 15060.68 00:20:20.840 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9ULJ6QqRXZ 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9ULJ6QqRXZ 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88308 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88308 /var/tmp/bdevperf.sock 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88308 ']' 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.840 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.840 [2024-11-06 14:24:46.472350] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
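The format_interchange_psk calls above turn a raw hex key into the NVMe TLS PSK interchange form, e.g. NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:. A minimal sketch of that encoding is below, mirroring the "python -" heredoc visible in the trace. The structure (prefix, two-digit hash identifier, base64 payload, trailing colon) is read off the trace output; the CRC32 trailer and its little-endian byte order are assumptions inferred from the payload length, not confirmed by the log.

# Sketch of the PSK interchange encoding (assumption: matches the
# format_key helper traced above; CRC byte order is inferred).
format_interchange_psk_sketch() {
    local key=$1 digest=$2   # digest 1 => SHA-256 per the NVMe/TCP TLS spec
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
# Payload: configured key bytes plus a 4-byte CRC32 (assumed little-endian).
payload = key + zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(payload).decode()}:")
EOF
}
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1

The test then writes each such string to a mktemp file and chmods it to 0600, since the keyring_file backend refuses keys that other users can read.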
00:20:20.840 [2024-11-06 14:24:46.472865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88308 ] 00:20:20.840 [2024-11-06 14:24:46.630382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.840 [2024-11-06 14:24:46.728931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.840 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.840 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:20.840 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9ULJ6QqRXZ 00:20:20.840 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:20.840 [2024-11-06 14:24:47.712847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.840 TLSTESTn1 00:20:20.840 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:20.840 Running I/O for 10 seconds... 00:20:22.338 5361.00 IOPS, 20.94 MiB/s [2024-11-06T14:24:50.908Z] 5557.50 IOPS, 21.71 MiB/s [2024-11-06T14:24:52.282Z] 5576.67 IOPS, 21.78 MiB/s [2024-11-06T14:24:53.216Z] 5535.50 IOPS, 21.62 MiB/s [2024-11-06T14:24:54.153Z] 5506.80 IOPS, 21.51 MiB/s [2024-11-06T14:24:55.088Z] 5438.33 IOPS, 21.24 MiB/s [2024-11-06T14:24:56.020Z] 5323.14 IOPS, 20.79 MiB/s [2024-11-06T14:24:56.951Z] 5259.25 IOPS, 20.54 MiB/s [2024-11-06T14:24:58.330Z] 5269.11 IOPS, 20.58 MiB/s [2024-11-06T14:24:58.330Z] 5284.50 IOPS, 20.64 MiB/s 00:20:30.695 Latency(us) 00:20:30.695 [2024-11-06T14:24:58.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.695 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:30.695 Verification LBA range: start 0x0 length 0x2000 00:20:30.695 TLSTESTn1 : 10.01 5290.98 20.67 0.00 0.00 24155.85 3226.39 27625.94 00:20:30.695 [2024-11-06T14:24:58.330Z] =================================================================================================================== 00:20:30.695 [2024-11-06T14:24:58.330Z] Total : 5290.98 20.67 0.00 0.00 24155.85 3226.39 27625.94 00:20:30.695 { 00:20:30.695 "results": [ 00:20:30.695 { 00:20:30.695 "job": "TLSTESTn1", 00:20:30.695 "core_mask": "0x4", 00:20:30.695 "workload": "verify", 00:20:30.695 "status": "finished", 00:20:30.695 "verify_range": { 00:20:30.695 "start": 0, 00:20:30.695 "length": 8192 00:20:30.695 }, 00:20:30.695 "queue_depth": 128, 00:20:30.695 "io_size": 4096, 00:20:30.695 "runtime": 10.01195, 00:20:30.695 "iops": 5290.977282147834, 00:20:30.695 "mibps": 20.667880008389975, 00:20:30.695 "io_failed": 0, 00:20:30.695 "io_timeout": 0, 00:20:30.695 "avg_latency_us": 24155.853104382637, 00:20:30.695 "min_latency_us": 3226.3876923076923, 00:20:30.695 "max_latency_us": 27625.944615384615 00:20:30.695 } 00:20:30.695 ], 00:20:30.695 "core_count": 1 00:20:30.695 } 00:20:30.695 14:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 88308 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88308 ']' 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88308 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88308 00:20:30.695 killing process with pid 88308 00:20:30.695 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.695 00:20:30.695 Latency(us) 00:20:30.695 [2024-11-06T14:24:58.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.695 [2024-11-06T14:24:58.330Z] =================================================================================================================== 00:20:30.695 [2024-11-06T14:24:58.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88308' 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88308 00:20:30.695 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88308 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bEooU5wntO 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bEooU5wntO 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:31.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bEooU5wntO 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bEooU5wntO 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88473 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88473 /var/tmp/bdevperf.sock 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88473 ']' 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:31.262 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.262 [2024-11-06 14:24:58.796708] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:31.262 [2024-11-06 14:24:58.796990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88473 ] 00:20:31.520 [2024-11-06 14:24:58.960869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.520 [2024-11-06 14:24:59.060281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.087 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:32.087 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:32.087 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bEooU5wntO 00:20:32.344 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.602 [2024-11-06 14:25:00.050498] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.602 [2024-11-06 14:25:00.059065] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:32.602 [2024-11-06 14:25:00.059517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:20:32.602 [2024-11-06 14:25:00.060498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:20:32.602 [2024-11-06 14:25:00.061489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:32.602 [2024-11-06 14:25:00.061517] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:20:32.602 [2024-11-06 14:25:00.061527] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:32.602 [2024-11-06 14:25:00.061542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
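The failure sequence above recurs in the negative cases that follow: when the TLS handshake cannot complete (wrong key material here; an unknown PSK identity in the next two cases), the target drops the TCP connection, so the initiator first reports errno 107 (Transport endpoint is not connected) from spdk_sock_recv() and then errno 9 (Bad file descriptor) once the qpair is torn down. A minimal sketch decoding those two values, assuming standard Linux errno numbering (consistent with the messages above):

import errno
import os

# The two errno values surfacing in the failed bdev_nvme_attach_controller calls.
for num in (107, 9):
    print(num, errno.errorcode[num], os.strerror(num))
# 107 ENOTCONN Transport endpoint is not connected
# 9 EBADF Bad file descriptor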
00:20:32.602 2024/11/06 14:25:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:32.602 request: 00:20:32.602 { 00:20:32.602 "method": "bdev_nvme_attach_controller", 00:20:32.602 "params": { 00:20:32.602 "name": "TLSTEST", 00:20:32.602 "trtype": "tcp", 00:20:32.602 "traddr": "10.0.0.3", 00:20:32.602 "adrfam": "ipv4", 00:20:32.602 "trsvcid": "4420", 00:20:32.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.602 "prchk_reftag": false, 00:20:32.602 "prchk_guard": false, 00:20:32.602 "hdgst": false, 00:20:32.602 "ddgst": false, 00:20:32.602 "psk": "key0", 00:20:32.602 "allow_unrecognized_csi": false 00:20:32.602 } 00:20:32.602 } 00:20:32.602 Got JSON-RPC error response 00:20:32.602 GoRPCClient: error on JSON-RPC call 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 88473 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88473 ']' 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88473 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88473 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88473' 00:20:32.602 killing process with pid 88473 00:20:32.602 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88473 00:20:32.602 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.602 00:20:32.602 Latency(us) 00:20:32.602 [2024-11-06T14:25:00.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.602 [2024-11-06T14:25:00.237Z] =================================================================================================================== 00:20:32.602 [2024-11-06T14:25:00.238Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.603 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88473 00:20:33.168 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:33.168 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:33.168 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:33.168 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:33.169 14:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9ULJ6QqRXZ 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9ULJ6QqRXZ 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9ULJ6QqRXZ 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9ULJ6QqRXZ 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88532 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88532 /var/tmp/bdevperf.sock 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88532 ']' 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.169 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.169 [2024-11-06 14:25:00.743652] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:33.169 [2024-11-06 14:25:00.743765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88532 ] 00:20:33.426 [2024-11-06 14:25:00.898792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.426 [2024-11-06 14:25:00.998307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.029 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:34.029 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:34.029 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9ULJ6QqRXZ 00:20:34.287 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:34.546 [2024-11-06 14:25:01.979449] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.546 [2024-11-06 14:25:01.989016] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:34.546 [2024-11-06 14:25:01.989058] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:34.546 [2024-11-06 14:25:01.989098] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:34.546 [2024-11-06 14:25:01.989709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:20:34.546 [2024-11-06 14:25:01.990686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:20:34.546 [2024-11-06 14:25:01.991677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:34.546 [2024-11-06 14:25:01.991708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:20:34.546 [2024-11-06 14:25:01.991720] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:34.546 [2024-11-06 14:25:01.991738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:34.546 2024/11/06 14:25:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:34.546 request: 00:20:34.546 { 00:20:34.546 "method": "bdev_nvme_attach_controller", 00:20:34.546 "params": { 00:20:34.546 "name": "TLSTEST", 00:20:34.546 "trtype": "tcp", 00:20:34.546 "traddr": "10.0.0.3", 00:20:34.546 "adrfam": "ipv4", 00:20:34.546 "trsvcid": "4420", 00:20:34.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.546 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.546 "prchk_reftag": false, 00:20:34.546 "prchk_guard": false, 00:20:34.546 "hdgst": false, 00:20:34.546 "ddgst": false, 00:20:34.546 "psk": "key0", 00:20:34.546 "allow_unrecognized_csi": false 00:20:34.546 } 00:20:34.546 } 00:20:34.546 Got JSON-RPC error response 00:20:34.546 GoRPCClient: error on JSON-RPC call 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 88532 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88532 ']' 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88532 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88532 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:34.546 killing process with pid 88532 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88532' 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88532 00:20:34.546 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.546 00:20:34.546 Latency(us) 00:20:34.546 [2024-11-06T14:25:02.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.546 [2024-11-06T14:25:02.181Z] =================================================================================================================== 00:20:34.546 [2024-11-06T14:25:02.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.546 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88532 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:35.113 14:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9ULJ6QqRXZ 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9ULJ6QqRXZ 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9ULJ6QqRXZ 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9ULJ6QqRXZ 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88586 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88586 /var/tmp/bdevperf.sock 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88586 ']' 00:20:35.113 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.114 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.114 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.114 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.114 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.114 [2024-11-06 14:25:02.697267] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:35.114 [2024-11-06 14:25:02.697354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88586 ] 00:20:35.372 [2024-11-06 14:25:02.854326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.372 [2024-11-06 14:25:02.960514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.306 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.306 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:36.306 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9ULJ6QqRXZ 00:20:36.306 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.564 [2024-11-06 14:25:03.971433] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.564 [2024-11-06 14:25:03.979606] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.564 [2024-11-06 14:25:03.979648] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.564 [2024-11-06 14:25:03.979714] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.564 [2024-11-06 14:25:03.980332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:20:36.564 [2024-11-06 14:25:03.981305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:20:36.564 [2024-11-06 14:25:03.982303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:36.564 [2024-11-06 14:25:03.982334] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:20:36.564 [2024-11-06 14:25:03.982351] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:36.564 [2024-11-06 14:25:03.982367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
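Both lookup failures above hinge on the TLS PSK identity string shown by tcp.c/posix.c: the target indexes registered keys by an identity derived from the host NQN and the subsystem NQN, so presenting host2 (or targeting cnode2) finds no entry even though key0 itself is valid. A sketch of how that identity is assembled, matching the NVMe0R01 strings in the errors; the field meanings ('0' protocol version, 'R' retained PSK, two-digit hash identifier) are an assumption from the NVMe/TCP spec rather than something this log proves:

def tls_psk_identity(hostnqn: str, subnqn: str, hash_id: str = "01") -> str:
    # Reproduces identities like
    # "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1".
    return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

print(tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))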
00:20:36.564 2024/11/06 14:25:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:36.564 request: 00:20:36.564 { 00:20:36.564 "method": "bdev_nvme_attach_controller", 00:20:36.564 "params": { 00:20:36.564 "name": "TLSTEST", 00:20:36.564 "trtype": "tcp", 00:20:36.564 "traddr": "10.0.0.3", 00:20:36.564 "adrfam": "ipv4", 00:20:36.564 "trsvcid": "4420", 00:20:36.564 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.564 "prchk_reftag": false, 00:20:36.564 "prchk_guard": false, 00:20:36.564 "hdgst": false, 00:20:36.564 "ddgst": false, 00:20:36.564 "psk": "key0", 00:20:36.564 "allow_unrecognized_csi": false 00:20:36.564 } 00:20:36.564 } 00:20:36.564 Got JSON-RPC error response 00:20:36.564 GoRPCClient: error on JSON-RPC call 00:20:36.564 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 88586 00:20:36.564 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88586 ']' 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88586 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88586 00:20:36.565 killing process with pid 88586 00:20:36.565 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.565 00:20:36.565 Latency(us) 00:20:36.565 [2024-11-06T14:25:04.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.565 [2024-11-06T14:25:04.200Z] =================================================================================================================== 00:20:36.565 [2024-11-06T14:25:04.200Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88586' 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88586 00:20:36.565 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88586 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.131 14:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88645 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88645 /var/tmp/bdevperf.sock 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88645 ']' 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.131 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.389 [2024-11-06 14:25:04.787683] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:37.389 [2024-11-06 14:25:04.788244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88645 ] 00:20:37.389 [2024-11-06 14:25:04.942721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.389 [2024-11-06 14:25:05.023185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.324 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.324 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:38.324 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:38.324 [2024-11-06 14:25:05.779218] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:38.324 [2024-11-06 14:25:05.779266] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:38.324 2024/11/06 14:25:05 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:20:38.324 request: 00:20:38.324 { 00:20:38.324 "method": "keyring_file_add_key", 00:20:38.324 "params": { 00:20:38.324 "name": "key0", 00:20:38.324 "path": "" 00:20:38.324 } 00:20:38.324 } 00:20:38.324 Got JSON-RPC error response 00:20:38.324 GoRPCClient: error on JSON-RPC call 00:20:38.324 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:38.582 [2024-11-06 14:25:05.991377] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.582 [2024-11-06 14:25:05.991434] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:38.582 2024/11/06 14:25:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:20:38.582 request: 00:20:38.582 { 00:20:38.582 "method": "bdev_nvme_attach_controller", 00:20:38.582 "params": { 00:20:38.582 "name": "TLSTEST", 00:20:38.582 "trtype": "tcp", 00:20:38.582 "traddr": "10.0.0.3", 00:20:38.582 "adrfam": "ipv4", 00:20:38.582 "trsvcid": "4420", 00:20:38.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.582 "prchk_reftag": false, 00:20:38.582 "prchk_guard": false, 00:20:38.582 "hdgst": false, 00:20:38.582 "ddgst": false, 00:20:38.582 "psk": "key0", 00:20:38.582 "allow_unrecognized_csi": false 00:20:38.582 } 00:20:38.582 } 00:20:38.582 Got JSON-RPC error response 00:20:38.582 GoRPCClient: error on JSON-RPC call 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 88645 00:20:38.582 14:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88645 ']' 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88645 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88645 00:20:38.582 killing process with pid 88645 00:20:38.582 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.582 00:20:38.582 Latency(us) 00:20:38.582 [2024-11-06T14:25:06.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.582 [2024-11-06T14:25:06.217Z] =================================================================================================================== 00:20:38.582 [2024-11-06T14:25:06.217Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88645' 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88645 00:20:38.582 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88645 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 87948 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 87948 ']' 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 87948 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87948 00:20:39.149 killing process with pid 87948 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87948' 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 87948 00:20:39.149 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 87948 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:40.093 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.CilNb6IJmL 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.CilNb6IJmL 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=88726 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 88726 00:20:40.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88726 ']' 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:40.094 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.094 [2024-11-06 14:25:07.519881] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:40.094 [2024-11-06 14:25:07.519988] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.094 [2024-11-06 14:25:07.670709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.352 [2024-11-06 14:25:07.753670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.352 [2024-11-06 14:25:07.753719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.352 [2024-11-06 14:25:07.753729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.352 [2024-11-06 14:25:07.753742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.352 [2024-11-06 14:25:07.753751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.352 [2024-11-06 14:25:07.754461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.CilNb6IJmL 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CilNb6IJmL 00:20:40.916 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.174 [2024-11-06 14:25:08.687791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.174 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:41.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:20:41.689 [2024-11-06 14:25:09.107772] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.689 [2024-11-06 14:25:09.108117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:41.689 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.945 malloc0 00:20:41.945 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.945 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
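The format_interchange_psk helper traced above wraps raw key material in the NVMe TLS interchange form: an NVMeTLSkey-1 prefix, a two-digit hash indicator, a base64 payload, and a trailing colon. A sketch that reproduces the key_long value printed in the log, under the assumption that the payload is the key bytes followed by their CRC32 in little-endian order:

import base64
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    # Payload = key material plus its CRC32 (little-endian), base64-encoded.
    # hash_id 1 or 2 selects SHA-256 or SHA-384 for the derived retained PSK.
    crc = zlib.crc32(key).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{hash_id:02}:{base64.b64encode(key + crc).decode()}:"

# The 48-character hex string is used verbatim as the key bytes, as in the log.
print(format_interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: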
keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:20:42.203 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CilNb6IJmL 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CilNb6IJmL 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88830 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88830 /var/tmp/bdevperf.sock 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88830 ']' 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:42.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:42.461 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.461 [2024-11-06 14:25:09.943350] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:42.461 [2024-11-06 14:25:09.943445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88830 ] 00:20:42.720 [2024-11-06 14:25:10.098376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.720 [2024-11-06 14:25:10.207026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.286 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.286 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:43.286 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:20:43.543 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:43.801 [2024-11-06 14:25:11.219566] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.801 TLSTESTn1 00:20:43.801 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:43.801 Running I/O for 10 seconds... 00:20:46.108 5159.00 IOPS, 20.15 MiB/s [2024-11-06T14:25:14.784Z] 5280.50 IOPS, 20.63 MiB/s [2024-11-06T14:25:15.719Z] 5249.33 IOPS, 20.51 MiB/s [2024-11-06T14:25:16.651Z] 5269.50 IOPS, 20.58 MiB/s [2024-11-06T14:25:17.586Z] 5315.20 IOPS, 20.76 MiB/s [2024-11-06T14:25:18.519Z] 5332.17 IOPS, 20.83 MiB/s [2024-11-06T14:25:19.452Z] 5296.57 IOPS, 20.69 MiB/s [2024-11-06T14:25:20.828Z] 5309.50 IOPS, 20.74 MiB/s [2024-11-06T14:25:21.762Z] 5330.89 IOPS, 20.82 MiB/s [2024-11-06T14:25:21.762Z] 5341.40 IOPS, 20.86 MiB/s 00:20:54.127 Latency(us) 00:20:54.127 [2024-11-06T14:25:21.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.127 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.127 Verification LBA range: start 0x0 length 0x2000 00:20:54.127 TLSTESTn1 : 10.01 5347.81 20.89 0.00 0.00 23899.76 3528.86 28029.24 00:20:54.127 [2024-11-06T14:25:21.762Z] =================================================================================================================== 00:20:54.127 [2024-11-06T14:25:21.762Z] Total : 5347.81 20.89 0.00 0.00 23899.76 3528.86 28029.24 00:20:54.127 { 00:20:54.127 "results": [ 00:20:54.127 { 00:20:54.127 "job": "TLSTESTn1", 00:20:54.127 "core_mask": "0x4", 00:20:54.127 "workload": "verify", 00:20:54.127 "status": "finished", 00:20:54.127 "verify_range": { 00:20:54.127 "start": 0, 00:20:54.127 "length": 8192 00:20:54.127 }, 00:20:54.127 "queue_depth": 128, 00:20:54.127 "io_size": 4096, 00:20:54.127 "runtime": 10.01157, 00:20:54.127 "iops": 5347.812580843964, 00:20:54.127 "mibps": 20.889892893921733, 00:20:54.127 "io_failed": 0, 00:20:54.127 "io_timeout": 0, 00:20:54.127 "avg_latency_us": 23899.756178960375, 00:20:54.127 "min_latency_us": 3528.8615384615387, 00:20:54.127 "max_latency_us": 28029.243076923078 00:20:54.127 } 00:20:54.127 ], 00:20:54.127 "core_count": 1 00:20:54.127 } 00:20:54.127 14:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 88830 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88830 ']' 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88830 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88830 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:54.127 killing process with pid 88830 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88830' 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88830 00:20:54.127 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.127 00:20:54.127 Latency(us) 00:20:54.127 [2024-11-06T14:25:21.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.127 [2024-11-06T14:25:21.762Z] =================================================================================================================== 00:20:54.127 [2024-11-06T14:25:21.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.127 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88830 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.CilNb6IJmL 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CilNb6IJmL 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CilNb6IJmL 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CilNb6IJmL 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.CilNb6IJmL 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=88996 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 88996 /var/tmp/bdevperf.sock 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 88996 ']' 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:54.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.694 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.953 [2024-11-06 14:25:22.343663] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:20:54.953 [2024-11-06 14:25:22.343775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88996 ] 00:20:54.953 [2024-11-06 14:25:22.499373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.211 [2024-11-06 14:25:22.613392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.783 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.783 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:55.783 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:20:55.783 [2024-11-06 14:25:23.379200] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CilNb6IJmL': 0100666 00:20:55.783 [2024-11-06 14:25:23.379255] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:55.783 2024/11/06 14:25:23 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.CilNb6IJmL], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:20:55.783 request: 00:20:55.783 { 00:20:55.783 "method": "keyring_file_add_key", 00:20:55.783 "params": { 00:20:55.783 "name": "key0", 00:20:55.783 "path": "/tmp/tmp.CilNb6IJmL" 00:20:55.783 } 00:20:55.783 } 00:20:55.783 Got JSON-RPC error response 00:20:55.783 GoRPCClient: error on JSON-RPC call 00:20:55.783 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:56.040 [2024-11-06 14:25:23.587393] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.040 [2024-11-06 14:25:23.587468] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:56.040 2024/11/06 14:25:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:20:56.040 request: 00:20:56.040 { 00:20:56.040 "method": "bdev_nvme_attach_controller", 00:20:56.040 "params": { 00:20:56.040 "name": "TLSTEST", 00:20:56.040 "trtype": "tcp", 00:20:56.040 "traddr": "10.0.0.3", 00:20:56.040 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "4420", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.041 "prchk_reftag": false, 00:20:56.041 "prchk_guard": false, 00:20:56.041 "hdgst": false, 00:20:56.041 "ddgst": false, 00:20:56.041 "psk": "key0", 00:20:56.041 "allow_unrecognized_csi": false 00:20:56.041 } 00:20:56.041 } 00:20:56.041 Got JSON-RPC error response 00:20:56.041 GoRPCClient: error on JSON-RPC call 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 88996 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88996 ']' 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88996 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88996 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:56.041 killing process with pid 88996 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88996' 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88996 00:20:56.041 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.041 00:20:56.041 Latency(us) 00:20:56.041 [2024-11-06T14:25:23.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.041 [2024-11-06T14:25:23.676Z] =================================================================================================================== 00:20:56.041 [2024-11-06T14:25:23.676Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.041 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88996 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
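The failed run above is the expected half of a negative test: target/tls.sh loosens the PSK file to mode 0666, the keyring module refuses key files accessible to group or others ("Invalid permissions for key file ... 0100666"), so keyring_file_add_key returns "Operation not permitted" and the follow-up bdev_nvme_attach_controller fails with Code=-126 "Required key not available". A minimal sketch of the same permission check, assuming a bdevperf RPC server already listening on /var/tmp/bdevperf.sock and an interleaved TLS PSK at /tmp/psk.txt (the key path and key name here are illustrative stand-ins for the run's temp file, not the exact names above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    chmod 0666 /tmp/psk.txt
    # Rejected: the keyring refuses key files readable by group/others,
    # so no "key0" exists for a later --psk reference.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt \
        || echo 'rejected as expected'
    chmod 0600 /tmp/psk.txt
    # Accepted: an owner-only file can be registered and then passed as --psk key0
    # to bdev_nvme_attach_controller for the TLS connection.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt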
00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 88726 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 88726 ']' 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 88726 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88726 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:56.975 killing process with pid 88726 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88726' 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 88726 00:20:56.975 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 88726 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89067 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89067 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89067 ']' 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:57.543 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.543 [2024-11-06 14:25:25.113728] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:57.543 [2024-11-06 14:25:25.113837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.802 [2024-11-06 14:25:25.269870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.802 [2024-11-06 14:25:25.350680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.802 [2024-11-06 14:25:25.350731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.802 [2024-11-06 14:25:25.350741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.802 [2024-11-06 14:25:25.350755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.802 [2024-11-06 14:25:25.350763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.802 [2024-11-06 14:25:25.351457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.368 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:58.368 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:58.368 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.368 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:58.368 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.CilNb6IJmL 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CilNb6IJmL 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.CilNb6IJmL 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CilNb6IJmL 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.626 [2024-11-06 14:25:26.192300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.626 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:58.884 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:20:59.142 [2024-11-06 14:25:26.596395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.142 [2024-11-06 14:25:26.596581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.142 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:59.408 malloc0 00:20:59.408 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:59.677 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:20:59.677 [2024-11-06 14:25:27.284154] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CilNb6IJmL': 0100666 00:20:59.677 [2024-11-06 14:25:27.284204] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:59.677 2024/11/06 14:25:27 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.CilNb6IJmL], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:20:59.677 request: 00:20:59.677 { 00:20:59.677 "method": "keyring_file_add_key", 00:20:59.677 "params": { 00:20:59.677 "name": "key0", 00:20:59.677 "path": "/tmp/tmp.CilNb6IJmL" 00:20:59.677 } 00:20:59.677 } 00:20:59.677 Got JSON-RPC error response 00:20:59.677 GoRPCClient: error on JSON-RPC call 00:20:59.677 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.935 [2024-11-06 14:25:27.488256] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:59.935 [2024-11-06 14:25:27.488324] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:59.935 2024/11/06 14:25:27 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:20:59.935 request: 00:20:59.935 { 00:20:59.935 "method": "nvmf_subsystem_add_host", 00:20:59.935 "params": { 00:20:59.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.935 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.935 "psk": "key0" 00:20:59.935 } 00:20:59.935 } 00:20:59.935 Got JSON-RPC error response 00:20:59.935 GoRPCClient: error on JSON-RPC call 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 89067 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89067 ']' 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 89067 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89067 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:59.935 killing process with pid 89067 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89067' 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89067 00:20:59.935 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89067 00:21:00.867 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.CilNb6IJmL 00:21:00.867 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89191 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89191 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89191 ']' 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:00.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:00.868 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.868 [2024-11-06 14:25:28.378497] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:00.868 [2024-11-06 14:25:28.378612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.125 [2024-11-06 14:25:28.534851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.125 [2024-11-06 14:25:28.628798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:01.125 [2024-11-06 14:25:28.628847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.125 [2024-11-06 14:25:28.628859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.125 [2024-11-06 14:25:28.628874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.125 [2024-11-06 14:25:28.628884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.125 [2024-11-06 14:25:28.629736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.CilNb6IJmL 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CilNb6IJmL 00:21:01.692 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.950 [2024-11-06 14:25:29.444793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.950 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.208 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:02.466 [2024-11-06 14:25:29.860870] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.466 [2024-11-06 14:25:29.861059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:02.466 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.466 malloc0 00:21:02.723 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.723 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:21:02.982 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.240 14:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=89295 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 89295 /var/tmp/bdevperf.sock 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89295 ']' 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.240 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.240 [2024-11-06 14:25:30.779976] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:03.240 [2024-11-06 14:25:30.780102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89295 ] 00:21:03.498 [2024-11-06 14:25:30.938970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.498 [2024-11-06 14:25:31.034754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.064 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:04.064 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:04.064 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:21:04.322 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:04.580 [2024-11-06 14:25:32.012210] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.580 TLSTESTn1 00:21:04.580 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:04.839 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:04.839 "subsystems": [ 00:21:04.839 { 00:21:04.839 "subsystem": "keyring", 00:21:04.839 "config": [ 00:21:04.839 { 00:21:04.839 "method": "keyring_file_add_key", 00:21:04.839 "params": { 00:21:04.839 "name": "key0", 00:21:04.839 "path": "/tmp/tmp.CilNb6IJmL" 00:21:04.839 } 00:21:04.839 } 00:21:04.839 ] 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "subsystem": "iobuf", 00:21:04.839 "config": [ 00:21:04.839 { 00:21:04.839 "method": "iobuf_set_options", 00:21:04.839 "params": { 00:21:04.839 "enable_numa": false, 00:21:04.839 "large_bufsize": 135168, 00:21:04.839 
"large_pool_count": 1024, 00:21:04.839 "small_bufsize": 8192, 00:21:04.839 "small_pool_count": 8192 00:21:04.839 } 00:21:04.839 } 00:21:04.839 ] 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "subsystem": "sock", 00:21:04.839 "config": [ 00:21:04.839 { 00:21:04.839 "method": "sock_set_default_impl", 00:21:04.839 "params": { 00:21:04.839 "impl_name": "posix" 00:21:04.839 } 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "method": "sock_impl_set_options", 00:21:04.839 "params": { 00:21:04.839 "enable_ktls": false, 00:21:04.839 "enable_placement_id": 0, 00:21:04.839 "enable_quickack": false, 00:21:04.839 "enable_recv_pipe": true, 00:21:04.839 "enable_zerocopy_send_client": false, 00:21:04.839 "enable_zerocopy_send_server": true, 00:21:04.839 "impl_name": "ssl", 00:21:04.839 "recv_buf_size": 4096, 00:21:04.839 "send_buf_size": 4096, 00:21:04.839 "tls_version": 0, 00:21:04.839 "zerocopy_threshold": 0 00:21:04.839 } 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "method": "sock_impl_set_options", 00:21:04.839 "params": { 00:21:04.839 "enable_ktls": false, 00:21:04.839 "enable_placement_id": 0, 00:21:04.839 "enable_quickack": false, 00:21:04.839 "enable_recv_pipe": true, 00:21:04.839 "enable_zerocopy_send_client": false, 00:21:04.839 "enable_zerocopy_send_server": true, 00:21:04.839 "impl_name": "posix", 00:21:04.839 "recv_buf_size": 2097152, 00:21:04.839 "send_buf_size": 2097152, 00:21:04.839 "tls_version": 0, 00:21:04.839 "zerocopy_threshold": 0 00:21:04.839 } 00:21:04.839 } 00:21:04.839 ] 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "subsystem": "vmd", 00:21:04.839 "config": [] 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "subsystem": "accel", 00:21:04.839 "config": [ 00:21:04.839 { 00:21:04.839 "method": "accel_set_options", 00:21:04.839 "params": { 00:21:04.839 "buf_count": 2048, 00:21:04.839 "large_cache_size": 16, 00:21:04.839 "sequence_count": 2048, 00:21:04.839 "small_cache_size": 128, 00:21:04.839 "task_count": 2048 00:21:04.839 } 00:21:04.839 } 00:21:04.839 ] 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "subsystem": "bdev", 00:21:04.839 "config": [ 00:21:04.839 { 00:21:04.839 "method": "bdev_set_options", 00:21:04.839 "params": { 00:21:04.839 "bdev_auto_examine": true, 00:21:04.839 "bdev_io_cache_size": 256, 00:21:04.839 "bdev_io_pool_size": 65535, 00:21:04.839 "iobuf_large_cache_size": 16, 00:21:04.839 "iobuf_small_cache_size": 128 00:21:04.839 } 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "method": "bdev_raid_set_options", 00:21:04.839 "params": { 00:21:04.839 "process_max_bandwidth_mb_sec": 0, 00:21:04.839 "process_window_size_kb": 1024 00:21:04.839 } 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "method": "bdev_iscsi_set_options", 00:21:04.839 "params": { 00:21:04.839 "timeout_sec": 30 00:21:04.839 } 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "method": "bdev_nvme_set_options", 00:21:04.839 "params": { 00:21:04.839 "action_on_timeout": "none", 00:21:04.839 "allow_accel_sequence": false, 00:21:04.839 "arbitration_burst": 0, 00:21:04.839 "bdev_retry_count": 3, 00:21:04.839 "ctrlr_loss_timeout_sec": 0, 00:21:04.839 "delay_cmd_submit": true, 00:21:04.839 "dhchap_dhgroups": [ 00:21:04.839 "null", 00:21:04.839 "ffdhe2048", 00:21:04.839 "ffdhe3072", 00:21:04.839 "ffdhe4096", 00:21:04.839 "ffdhe6144", 00:21:04.839 "ffdhe8192" 00:21:04.839 ], 00:21:04.839 "dhchap_digests": [ 00:21:04.839 "sha256", 00:21:04.839 "sha384", 00:21:04.839 "sha512" 00:21:04.839 ], 00:21:04.839 "disable_auto_failback": false, 00:21:04.839 "fast_io_fail_timeout_sec": 0, 00:21:04.839 "generate_uuids": false, 00:21:04.839 
"high_priority_weight": 0, 00:21:04.839 "io_path_stat": false, 00:21:04.839 "io_queue_requests": 0, 00:21:04.839 "keep_alive_timeout_ms": 10000, 00:21:04.839 "low_priority_weight": 0, 00:21:04.839 "medium_priority_weight": 0, 00:21:04.839 "nvme_adminq_poll_period_us": 10000, 00:21:04.839 "nvme_error_stat": false, 00:21:04.839 "nvme_ioq_poll_period_us": 0, 00:21:04.839 "rdma_cm_event_timeout_ms": 0, 00:21:04.839 "rdma_max_cq_size": 0, 00:21:04.839 "rdma_srq_size": 0, 00:21:04.839 "reconnect_delay_sec": 0, 00:21:04.839 "timeout_admin_us": 0, 00:21:04.839 "timeout_us": 0, 00:21:04.839 "transport_ack_timeout": 0, 00:21:04.839 "transport_retry_count": 4, 00:21:04.839 "transport_tos": 0 00:21:04.839 } 00:21:04.839 }, 00:21:04.839 { 00:21:04.839 "method": "bdev_nvme_set_hotplug", 00:21:04.839 "params": { 00:21:04.839 "enable": false, 00:21:04.839 "period_us": 100000 00:21:04.839 } 00:21:04.839 }, 00:21:04.839 { 00:21:04.840 "method": "bdev_malloc_create", 00:21:04.840 "params": { 00:21:04.840 "block_size": 4096, 00:21:04.840 "dif_is_head_of_md": false, 00:21:04.840 "dif_pi_format": 0, 00:21:04.840 "dif_type": 0, 00:21:04.840 "md_size": 0, 00:21:04.840 "name": "malloc0", 00:21:04.840 "num_blocks": 8192, 00:21:04.840 "optimal_io_boundary": 0, 00:21:04.840 "physical_block_size": 4096, 00:21:04.840 "uuid": "13b63f45-62ba-4929-8604-dfcd10d2d41b" 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "method": "bdev_wait_for_examine" 00:21:04.840 } 00:21:04.840 ] 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "subsystem": "nbd", 00:21:04.840 "config": [] 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "subsystem": "scheduler", 00:21:04.840 "config": [ 00:21:04.840 { 00:21:04.840 "method": "framework_set_scheduler", 00:21:04.840 "params": { 00:21:04.840 "name": "static" 00:21:04.840 } 00:21:04.840 } 00:21:04.840 ] 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "subsystem": "nvmf", 00:21:04.840 "config": [ 00:21:04.840 { 00:21:04.840 "method": "nvmf_set_config", 00:21:04.840 "params": { 00:21:04.840 "admin_cmd_passthru": { 00:21:04.840 "identify_ctrlr": false 00:21:04.840 }, 00:21:04.840 "dhchap_dhgroups": [ 00:21:04.840 "null", 00:21:04.840 "ffdhe2048", 00:21:04.840 "ffdhe3072", 00:21:04.840 "ffdhe4096", 00:21:04.840 "ffdhe6144", 00:21:04.840 "ffdhe8192" 00:21:04.840 ], 00:21:04.840 "dhchap_digests": [ 00:21:04.840 "sha256", 00:21:04.840 "sha384", 00:21:04.840 "sha512" 00:21:04.840 ], 00:21:04.840 "discovery_filter": "match_any" 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "method": "nvmf_set_max_subsystems", 00:21:04.840 "params": { 00:21:04.840 "max_subsystems": 1024 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "method": "nvmf_set_crdt", 00:21:04.840 "params": { 00:21:04.840 "crdt1": 0, 00:21:04.840 "crdt2": 0, 00:21:04.840 "crdt3": 0 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "method": "nvmf_create_transport", 00:21:04.840 "params": { 00:21:04.840 "abort_timeout_sec": 1, 00:21:04.840 "ack_timeout": 0, 00:21:04.840 "buf_cache_size": 4294967295, 00:21:04.840 "c2h_success": false, 00:21:04.840 "data_wr_pool_size": 0, 00:21:04.840 "dif_insert_or_strip": false, 00:21:04.840 "in_capsule_data_size": 4096, 00:21:04.840 "io_unit_size": 131072, 00:21:04.840 "max_aq_depth": 128, 00:21:04.840 "max_io_qpairs_per_ctrlr": 127, 00:21:04.840 "max_io_size": 131072, 00:21:04.840 "max_queue_depth": 128, 00:21:04.840 "num_shared_buffers": 511, 00:21:04.840 "sock_priority": 0, 00:21:04.840 "trtype": "TCP", 00:21:04.840 "zcopy": false 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 
00:21:04.840 "method": "nvmf_create_subsystem", 00:21:04.840 "params": { 00:21:04.840 "allow_any_host": false, 00:21:04.840 "ana_reporting": false, 00:21:04.840 "max_cntlid": 65519, 00:21:04.840 "max_namespaces": 10, 00:21:04.840 "min_cntlid": 1, 00:21:04.840 "model_number": "SPDK bdev Controller", 00:21:04.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.840 "serial_number": "SPDK00000000000001" 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "method": "nvmf_subsystem_add_host", 00:21:04.840 "params": { 00:21:04.840 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.840 "psk": "key0" 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "method": "nvmf_subsystem_add_ns", 00:21:04.840 "params": { 00:21:04.840 "namespace": { 00:21:04.840 "bdev_name": "malloc0", 00:21:04.840 "nguid": "13B63F4562BA49298604DFCD10D2D41B", 00:21:04.840 "no_auto_visible": false, 00:21:04.840 "nsid": 1, 00:21:04.840 "uuid": "13b63f45-62ba-4929-8604-dfcd10d2d41b" 00:21:04.840 }, 00:21:04.840 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:04.840 } 00:21:04.840 }, 00:21:04.840 { 00:21:04.840 "method": "nvmf_subsystem_add_listener", 00:21:04.840 "params": { 00:21:04.840 "listen_address": { 00:21:04.840 "adrfam": "IPv4", 00:21:04.840 "traddr": "10.0.0.3", 00:21:04.840 "trsvcid": "4420", 00:21:04.840 "trtype": "TCP" 00:21:04.840 }, 00:21:04.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.840 "secure_channel": true 00:21:04.840 } 00:21:04.840 } 00:21:04.840 ] 00:21:04.840 } 00:21:04.840 ] 00:21:04.840 }' 00:21:04.840 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:05.103 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:05.103 "subsystems": [ 00:21:05.103 { 00:21:05.103 "subsystem": "keyring", 00:21:05.103 "config": [ 00:21:05.103 { 00:21:05.103 "method": "keyring_file_add_key", 00:21:05.103 "params": { 00:21:05.103 "name": "key0", 00:21:05.103 "path": "/tmp/tmp.CilNb6IJmL" 00:21:05.103 } 00:21:05.103 } 00:21:05.103 ] 00:21:05.103 }, 00:21:05.103 { 00:21:05.103 "subsystem": "iobuf", 00:21:05.103 "config": [ 00:21:05.103 { 00:21:05.103 "method": "iobuf_set_options", 00:21:05.103 "params": { 00:21:05.103 "enable_numa": false, 00:21:05.103 "large_bufsize": 135168, 00:21:05.103 "large_pool_count": 1024, 00:21:05.103 "small_bufsize": 8192, 00:21:05.103 "small_pool_count": 8192 00:21:05.103 } 00:21:05.103 } 00:21:05.103 ] 00:21:05.103 }, 00:21:05.103 { 00:21:05.103 "subsystem": "sock", 00:21:05.103 "config": [ 00:21:05.103 { 00:21:05.103 "method": "sock_set_default_impl", 00:21:05.103 "params": { 00:21:05.103 "impl_name": "posix" 00:21:05.103 } 00:21:05.103 }, 00:21:05.103 { 00:21:05.103 "method": "sock_impl_set_options", 00:21:05.103 "params": { 00:21:05.103 "enable_ktls": false, 00:21:05.103 "enable_placement_id": 0, 00:21:05.103 "enable_quickack": false, 00:21:05.103 "enable_recv_pipe": true, 00:21:05.103 "enable_zerocopy_send_client": false, 00:21:05.103 "enable_zerocopy_send_server": true, 00:21:05.103 "impl_name": "ssl", 00:21:05.103 "recv_buf_size": 4096, 00:21:05.103 "send_buf_size": 4096, 00:21:05.103 "tls_version": 0, 00:21:05.103 "zerocopy_threshold": 0 00:21:05.104 } 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "method": "sock_impl_set_options", 00:21:05.104 "params": { 00:21:05.104 "enable_ktls": false, 00:21:05.104 "enable_placement_id": 0, 00:21:05.104 "enable_quickack": false, 00:21:05.104 
"enable_recv_pipe": true, 00:21:05.104 "enable_zerocopy_send_client": false, 00:21:05.104 "enable_zerocopy_send_server": true, 00:21:05.104 "impl_name": "posix", 00:21:05.104 "recv_buf_size": 2097152, 00:21:05.104 "send_buf_size": 2097152, 00:21:05.104 "tls_version": 0, 00:21:05.104 "zerocopy_threshold": 0 00:21:05.104 } 00:21:05.104 } 00:21:05.104 ] 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "subsystem": "vmd", 00:21:05.104 "config": [] 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "subsystem": "accel", 00:21:05.104 "config": [ 00:21:05.104 { 00:21:05.104 "method": "accel_set_options", 00:21:05.104 "params": { 00:21:05.104 "buf_count": 2048, 00:21:05.104 "large_cache_size": 16, 00:21:05.104 "sequence_count": 2048, 00:21:05.104 "small_cache_size": 128, 00:21:05.104 "task_count": 2048 00:21:05.104 } 00:21:05.104 } 00:21:05.104 ] 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "subsystem": "bdev", 00:21:05.104 "config": [ 00:21:05.104 { 00:21:05.104 "method": "bdev_set_options", 00:21:05.104 "params": { 00:21:05.104 "bdev_auto_examine": true, 00:21:05.104 "bdev_io_cache_size": 256, 00:21:05.104 "bdev_io_pool_size": 65535, 00:21:05.104 "iobuf_large_cache_size": 16, 00:21:05.104 "iobuf_small_cache_size": 128 00:21:05.104 } 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "method": "bdev_raid_set_options", 00:21:05.104 "params": { 00:21:05.104 "process_max_bandwidth_mb_sec": 0, 00:21:05.104 "process_window_size_kb": 1024 00:21:05.104 } 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "method": "bdev_iscsi_set_options", 00:21:05.104 "params": { 00:21:05.104 "timeout_sec": 30 00:21:05.104 } 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "method": "bdev_nvme_set_options", 00:21:05.104 "params": { 00:21:05.104 "action_on_timeout": "none", 00:21:05.104 "allow_accel_sequence": false, 00:21:05.104 "arbitration_burst": 0, 00:21:05.104 "bdev_retry_count": 3, 00:21:05.104 "ctrlr_loss_timeout_sec": 0, 00:21:05.104 "delay_cmd_submit": true, 00:21:05.104 "dhchap_dhgroups": [ 00:21:05.104 "null", 00:21:05.104 "ffdhe2048", 00:21:05.104 "ffdhe3072", 00:21:05.104 "ffdhe4096", 00:21:05.104 "ffdhe6144", 00:21:05.104 "ffdhe8192" 00:21:05.104 ], 00:21:05.104 "dhchap_digests": [ 00:21:05.104 "sha256", 00:21:05.104 "sha384", 00:21:05.104 "sha512" 00:21:05.104 ], 00:21:05.104 "disable_auto_failback": false, 00:21:05.104 "fast_io_fail_timeout_sec": 0, 00:21:05.104 "generate_uuids": false, 00:21:05.104 "high_priority_weight": 0, 00:21:05.104 "io_path_stat": false, 00:21:05.104 "io_queue_requests": 512, 00:21:05.104 "keep_alive_timeout_ms": 10000, 00:21:05.104 "low_priority_weight": 0, 00:21:05.104 "medium_priority_weight": 0, 00:21:05.104 "nvme_adminq_poll_period_us": 10000, 00:21:05.104 "nvme_error_stat": false, 00:21:05.104 "nvme_ioq_poll_period_us": 0, 00:21:05.104 "rdma_cm_event_timeout_ms": 0, 00:21:05.104 "rdma_max_cq_size": 0, 00:21:05.104 "rdma_srq_size": 0, 00:21:05.104 "reconnect_delay_sec": 0, 00:21:05.104 "timeout_admin_us": 0, 00:21:05.104 "timeout_us": 0, 00:21:05.104 "transport_ack_timeout": 0, 00:21:05.104 "transport_retry_count": 4, 00:21:05.104 "transport_tos": 0 00:21:05.104 } 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "method": "bdev_nvme_attach_controller", 00:21:05.104 "params": { 00:21:05.104 "adrfam": "IPv4", 00:21:05.104 "ctrlr_loss_timeout_sec": 0, 00:21:05.104 "ddgst": false, 00:21:05.104 "fast_io_fail_timeout_sec": 0, 00:21:05.104 "hdgst": false, 00:21:05.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.104 "multipath": "multipath", 00:21:05.104 "name": "TLSTEST", 00:21:05.104 "prchk_guard": false, 
00:21:05.104 "prchk_reftag": false, 00:21:05.104 "psk": "key0", 00:21:05.104 "reconnect_delay_sec": 0, 00:21:05.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.104 "traddr": "10.0.0.3", 00:21:05.104 "trsvcid": "4420", 00:21:05.104 "trtype": "TCP" 00:21:05.104 } 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "method": "bdev_nvme_set_hotplug", 00:21:05.104 "params": { 00:21:05.104 "enable": false, 00:21:05.104 "period_us": 100000 00:21:05.104 } 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "method": "bdev_wait_for_examine" 00:21:05.104 } 00:21:05.104 ] 00:21:05.104 }, 00:21:05.104 { 00:21:05.104 "subsystem": "nbd", 00:21:05.104 "config": [] 00:21:05.104 } 00:21:05.104 ] 00:21:05.104 }' 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 89295 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89295 ']' 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89295 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89295 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:05.104 killing process with pid 89295 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89295' 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89295 00:21:05.104 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89295 00:21:05.104 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.104 00:21:05.104 Latency(us) 00:21:05.104 [2024-11-06T14:25:32.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.104 [2024-11-06T14:25:32.739Z] =================================================================================================================== 00:21:05.104 [2024-11-06T14:25:32.739Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.669 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 89191 00:21:05.669 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89191 ']' 00:21:05.669 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89191 00:21:05.669 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:05.669 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:05.669 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89191 00:21:05.926 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:05.927 killing process with pid 89191 00:21:05.927 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:05.927 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 89191' 00:21:05.927 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89191 00:21:05.927 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89191 00:21:06.521 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:06.521 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.521 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:06.521 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.521 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:06.521 "subsystems": [ 00:21:06.521 { 00:21:06.521 "subsystem": "keyring", 00:21:06.521 "config": [ 00:21:06.521 { 00:21:06.521 "method": "keyring_file_add_key", 00:21:06.521 "params": { 00:21:06.521 "name": "key0", 00:21:06.521 "path": "/tmp/tmp.CilNb6IJmL" 00:21:06.521 } 00:21:06.521 } 00:21:06.521 ] 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "subsystem": "iobuf", 00:21:06.521 "config": [ 00:21:06.521 { 00:21:06.521 "method": "iobuf_set_options", 00:21:06.521 "params": { 00:21:06.521 "enable_numa": false, 00:21:06.521 "large_bufsize": 135168, 00:21:06.521 "large_pool_count": 1024, 00:21:06.521 "small_bufsize": 8192, 00:21:06.521 "small_pool_count": 8192 00:21:06.521 } 00:21:06.521 } 00:21:06.521 ] 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "subsystem": "sock", 00:21:06.521 "config": [ 00:21:06.521 { 00:21:06.521 "method": "sock_set_default_impl", 00:21:06.521 "params": { 00:21:06.521 "impl_name": "posix" 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "sock_impl_set_options", 00:21:06.521 "params": { 00:21:06.521 "enable_ktls": false, 00:21:06.521 "enable_placement_id": 0, 00:21:06.521 "enable_quickack": false, 00:21:06.521 "enable_recv_pipe": true, 00:21:06.521 "enable_zerocopy_send_client": false, 00:21:06.521 "enable_zerocopy_send_server": true, 00:21:06.521 "impl_name": "ssl", 00:21:06.521 "recv_buf_size": 4096, 00:21:06.521 "send_buf_size": 4096, 00:21:06.521 "tls_version": 0, 00:21:06.521 "zerocopy_threshold": 0 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "sock_impl_set_options", 00:21:06.521 "params": { 00:21:06.521 "enable_ktls": false, 00:21:06.521 "enable_placement_id": 0, 00:21:06.521 "enable_quickack": false, 00:21:06.521 "enable_recv_pipe": true, 00:21:06.521 "enable_zerocopy_send_client": false, 00:21:06.521 "enable_zerocopy_send_server": true, 00:21:06.521 "impl_name": "posix", 00:21:06.521 "recv_buf_size": 2097152, 00:21:06.521 "send_buf_size": 2097152, 00:21:06.521 "tls_version": 0, 00:21:06.521 "zerocopy_threshold": 0 00:21:06.521 } 00:21:06.521 } 00:21:06.521 ] 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "subsystem": "vmd", 00:21:06.521 "config": [] 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "subsystem": "accel", 00:21:06.521 "config": [ 00:21:06.521 { 00:21:06.521 "method": "accel_set_options", 00:21:06.521 "params": { 00:21:06.521 "buf_count": 2048, 00:21:06.521 "large_cache_size": 16, 00:21:06.521 "sequence_count": 2048, 00:21:06.521 "small_cache_size": 128, 00:21:06.521 "task_count": 2048 00:21:06.521 } 00:21:06.521 } 00:21:06.521 ] 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "subsystem": "bdev", 00:21:06.521 "config": [ 00:21:06.521 { 00:21:06.521 "method": "bdev_set_options", 00:21:06.521 "params": { 00:21:06.521 "bdev_auto_examine": true, 00:21:06.521 
"bdev_io_cache_size": 256, 00:21:06.521 "bdev_io_pool_size": 65535, 00:21:06.521 "iobuf_large_cache_size": 16, 00:21:06.521 "iobuf_small_cache_size": 128 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "bdev_raid_set_options", 00:21:06.521 "params": { 00:21:06.521 "process_max_bandwidth_mb_sec": 0, 00:21:06.521 "process_window_size_kb": 1024 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "bdev_iscsi_set_options", 00:21:06.521 "params": { 00:21:06.521 "timeout_sec": 30 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "bdev_nvme_set_options", 00:21:06.521 "params": { 00:21:06.521 "action_on_timeout": "none", 00:21:06.521 "allow_accel_sequence": false, 00:21:06.521 "arbitration_burst": 0, 00:21:06.521 "bdev_retry_count": 3, 00:21:06.521 "ctrlr_loss_timeout_sec": 0, 00:21:06.521 "delay_cmd_submit": true, 00:21:06.521 "dhchap_dhgroups": [ 00:21:06.521 "null", 00:21:06.521 "ffdhe2048", 00:21:06.521 "ffdhe3072", 00:21:06.521 "ffdhe4096", 00:21:06.521 "ffdhe6144", 00:21:06.521 "ffdhe8192" 00:21:06.521 ], 00:21:06.521 "dhchap_digests": [ 00:21:06.521 "sha256", 00:21:06.521 "sha384", 00:21:06.521 "sha512" 00:21:06.521 ], 00:21:06.521 "disable_auto_failback": false, 00:21:06.521 "fast_io_fail_timeout_sec": 0, 00:21:06.521 "generate_uuids": false, 00:21:06.521 "high_priority_weight": 0, 00:21:06.521 "io_path_stat": false, 00:21:06.521 "io_queue_requests": 0, 00:21:06.521 "keep_alive_timeout_ms": 10000, 00:21:06.521 "low_priority_weight": 0, 00:21:06.521 "medium_priority_weight": 0, 00:21:06.521 "nvme_adminq_poll_period_us": 10000, 00:21:06.521 "nvme_error_stat": false, 00:21:06.521 "nvme_ioq_poll_period_us": 0, 00:21:06.521 "rdma_cm_event_timeout_ms": 0, 00:21:06.521 "rdma_max_cq_size": 0, 00:21:06.521 "rdma_srq_size": 0, 00:21:06.521 "reconnect_delay_sec": 0, 00:21:06.521 "timeout_admin_us": 0, 00:21:06.521 "timeout_us": 0, 00:21:06.521 "transport_ack_timeout": 0, 00:21:06.521 "transport_retry_count": 4, 00:21:06.521 "transport_tos": 0 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "bdev_nvme_set_hotplug", 00:21:06.521 "params": { 00:21:06.521 "enable": false, 00:21:06.521 "period_us": 100000 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "bdev_malloc_create", 00:21:06.521 "params": { 00:21:06.521 "block_size": 4096, 00:21:06.521 "dif_is_head_of_md": false, 00:21:06.521 "dif_pi_format": 0, 00:21:06.521 "dif_type": 0, 00:21:06.521 "md_size": 0, 00:21:06.521 "name": "malloc0", 00:21:06.521 "num_blocks": 8192, 00:21:06.521 "optimal_io_boundary": 0, 00:21:06.521 "physical_block_size": 4096, 00:21:06.521 "uuid": "13b63f45-62ba-4929-8604-dfcd10d2d41b" 00:21:06.521 } 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "method": "bdev_wait_for_examine" 00:21:06.521 } 00:21:06.521 ] 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "subsystem": "nbd", 00:21:06.521 "config": [] 00:21:06.521 }, 00:21:06.521 { 00:21:06.521 "subsystem": "scheduler", 00:21:06.522 "config": [ 00:21:06.522 { 00:21:06.522 "method": "framework_set_scheduler", 00:21:06.522 "params": { 00:21:06.522 "name": "static" 00:21:06.522 } 00:21:06.522 } 00:21:06.522 ] 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "subsystem": "nvmf", 00:21:06.522 "config": [ 00:21:06.522 { 00:21:06.522 "method": "nvmf_set_config", 00:21:06.522 "params": { 00:21:06.522 "admin_cmd_passthru": { 00:21:06.522 "identify_ctrlr": false 00:21:06.522 }, 00:21:06.522 "dhchap_dhgroups": [ 00:21:06.522 "null", 00:21:06.522 "ffdhe2048", 00:21:06.522 "ffdhe3072", 00:21:06.522 
"ffdhe4096", 00:21:06.522 "ffdhe6144", 00:21:06.522 "ffdhe8192" 00:21:06.522 ], 00:21:06.522 "dhchap_digests": [ 00:21:06.522 "sha256", 00:21:06.522 "sha384", 00:21:06.522 "sha512" 00:21:06.522 ], 00:21:06.522 "discovery_filter": "match_any" 00:21:06.522 } 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "method": "nvmf_set_max_subsystems", 00:21:06.522 "params": { 00:21:06.522 "max_subsystems": 1024 00:21:06.522 } 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "method": "nvmf_set_crdt", 00:21:06.522 "params": { 00:21:06.522 "crdt1": 0, 00:21:06.522 "crdt2": 0, 00:21:06.522 "crdt3": 0 00:21:06.522 } 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "method": "nvmf_create_transport", 00:21:06.522 "params": { 00:21:06.522 "abort_timeout_sec": 1, 00:21:06.522 "ack_timeout": 0, 00:21:06.522 "buf_cache_size": 4294967295, 00:21:06.522 "c2h_success": false, 00:21:06.522 "data_wr_pool_size": 0, 00:21:06.522 "dif_insert_or_strip": false, 00:21:06.522 "in_capsule_data_size": 4096, 00:21:06.522 "io_unit_size": 131072, 00:21:06.522 "max_aq_depth": 128, 00:21:06.522 "max_io_qpairs_per_ctrlr": 127, 00:21:06.522 "max_io_size": 131072, 00:21:06.522 "max_queue_depth": 128, 00:21:06.522 "num_shared_buffers": 511, 00:21:06.522 "sock_priority": 0, 00:21:06.522 "trtype": "TCP", 00:21:06.522 "zcopy": false 00:21:06.522 } 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "method": "nvmf_create_subsystem", 00:21:06.522 "params": { 00:21:06.522 "allow_any_host": false, 00:21:06.522 "ana_reporting": false, 00:21:06.522 "max_cntlid": 65519, 00:21:06.522 "max_namespaces": 10, 00:21:06.522 "min_cntlid": 1, 00:21:06.522 "model_number": "SPDK bdev Controller", 00:21:06.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.522 "serial_number": "SPDK00000000000001" 00:21:06.522 } 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "method": "nvmf_subsystem_add_host", 00:21:06.522 "params": { 00:21:06.522 "host": "nqn.2016-06.io.spdk:host1", 00:21:06.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.522 "psk": "key0" 00:21:06.522 } 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "method": "nvmf_subsystem_add_ns", 00:21:06.522 "params": { 00:21:06.522 "namespace": { 00:21:06.522 "bdev_name": "malloc0", 00:21:06.522 "nguid": "13B63F4562BA49298604DFCD10D2D41B", 00:21:06.522 "no_auto_visible": false, 00:21:06.522 "nsid": 1, 00:21:06.522 "uuid": "13b63f45-62ba-4929-8604-dfcd10d2d41b" 00:21:06.522 }, 00:21:06.522 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:06.522 } 00:21:06.522 }, 00:21:06.522 { 00:21:06.522 "method": "nvmf_subsystem_add_listener", 00:21:06.522 "params": { 00:21:06.522 "listen_address": { 00:21:06.522 "adrfam": "IPv4", 00:21:06.522 "traddr": "10.0.0.3", 00:21:06.522 "trsvcid": "4420", 00:21:06.522 "trtype": "TCP" 00:21:06.522 }, 00:21:06.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.522 "secure_channel": true 00:21:06.522 } 00:21:06.522 } 00:21:06.522 ] 00:21:06.522 } 00:21:06.522 ] 00:21:06.522 }' 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89395 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89395 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89395 ']' 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:06.522 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.522 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:06.522 [2024-11-06 14:25:34.010730] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:06.522 [2024-11-06 14:25:34.010844] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.780 [2024-11-06 14:25:34.165728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.780 [2024-11-06 14:25:34.243067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.780 [2024-11-06 14:25:34.243109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.780 [2024-11-06 14:25:34.243119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.780 [2024-11-06 14:25:34.243131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.780 [2024-11-06 14:25:34.243138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.780 [2024-11-06 14:25:34.243850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.039 [2024-11-06 14:25:34.571181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.039 [2024-11-06 14:25:34.603133] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.039 [2024-11-06 14:25:34.603302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
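At this point the test has restarted nvmf_tgt from the JSON produced by save_config (fed back in through -c /dev/fd/62), and it is about to launch bdevperf the same way with -c /dev/fd/63; the configuration echoed below is what that process consumes at startup, so the keyring entry, sock options, transport, subsystem, and TLS listener come up without any further RPC calls. A minimal sketch of that save/restore round-trip, assuming a target configured as above on the default /var/tmp/spdk.sock (the output file name and pid variable are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Snapshot the current keyring/sock/bdev/nvmf configuration as JSON.
    $rpc save_config > /tmp/tgt_config.json
    # Stop the running target ($nvmfpid is a hypothetical pid variable).
    kill "$nvmfpid"; wait "$nvmfpid"
    # Restart from the saved config: subsystems, namespaces, PSK and the
    # secure-channel listener are recreated without replaying the RPCs.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_config.json &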
00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=89433 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 89433 /var/tmp/bdevperf.sock 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89433 ']' 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:07.297 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:07.297 "subsystems": [ 00:21:07.297 { 00:21:07.297 "subsystem": "keyring", 00:21:07.297 "config": [ 00:21:07.297 { 00:21:07.298 "method": "keyring_file_add_key", 00:21:07.298 "params": { 00:21:07.298 "name": "key0", 00:21:07.298 "path": "/tmp/tmp.CilNb6IJmL" 00:21:07.298 } 00:21:07.298 } 00:21:07.298 ] 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "subsystem": "iobuf", 00:21:07.298 "config": [ 00:21:07.298 { 00:21:07.298 "method": "iobuf_set_options", 00:21:07.298 "params": { 00:21:07.298 "enable_numa": false, 00:21:07.298 "large_bufsize": 135168, 00:21:07.298 "large_pool_count": 1024, 00:21:07.298 "small_bufsize": 8192, 00:21:07.298 "small_pool_count": 8192 00:21:07.298 } 00:21:07.298 } 00:21:07.298 ] 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "subsystem": "sock", 00:21:07.298 "config": [ 00:21:07.298 { 00:21:07.298 "method": "sock_set_default_impl", 00:21:07.298 "params": { 00:21:07.298 "impl_name": "posix" 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "sock_impl_set_options", 00:21:07.298 "params": { 00:21:07.298 "enable_ktls": false, 00:21:07.298 "enable_placement_id": 0, 00:21:07.298 "enable_quickack": false, 00:21:07.298 "enable_recv_pipe": true, 00:21:07.298 "enable_zerocopy_send_client": false, 00:21:07.298 "enable_zerocopy_send_server": true, 00:21:07.298 "impl_name": "ssl", 00:21:07.298 "recv_buf_size": 4096, 00:21:07.298 "send_buf_size": 4096, 00:21:07.298 "tls_version": 0, 00:21:07.298 "zerocopy_threshold": 0 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "sock_impl_set_options", 00:21:07.298 "params": { 00:21:07.298 "enable_ktls": false, 00:21:07.298 "enable_placement_id": 0, 00:21:07.298 "enable_quickack": false, 00:21:07.298 "enable_recv_pipe": true, 00:21:07.298 "enable_zerocopy_send_client": false, 00:21:07.298 "enable_zerocopy_send_server": true, 00:21:07.298 "impl_name": "posix", 00:21:07.298 "recv_buf_size": 2097152, 00:21:07.298 "send_buf_size": 2097152, 00:21:07.298 "tls_version": 0, 00:21:07.298 "zerocopy_threshold": 0 00:21:07.298 } 00:21:07.298 } 00:21:07.298 ] 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "subsystem": "vmd", 00:21:07.298 "config": [] 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "subsystem": "accel", 00:21:07.298 
"config": [ 00:21:07.298 { 00:21:07.298 "method": "accel_set_options", 00:21:07.298 "params": { 00:21:07.298 "buf_count": 2048, 00:21:07.298 "large_cache_size": 16, 00:21:07.298 "sequence_count": 2048, 00:21:07.298 "small_cache_size": 128, 00:21:07.298 "task_count": 2048 00:21:07.298 } 00:21:07.298 } 00:21:07.298 ] 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "subsystem": "bdev", 00:21:07.298 "config": [ 00:21:07.298 { 00:21:07.298 "method": "bdev_set_options", 00:21:07.298 "params": { 00:21:07.298 "bdev_auto_examine": true, 00:21:07.298 "bdev_io_cache_size": 256, 00:21:07.298 "bdev_io_pool_size": 65535, 00:21:07.298 "iobuf_large_cache_size": 16, 00:21:07.298 "iobuf_small_cache_size": 128 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "bdev_raid_set_options", 00:21:07.298 "params": { 00:21:07.298 "process_max_bandwidth_mb_sec": 0, 00:21:07.298 "process_window_size_kb": 1024 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "bdev_iscsi_set_options", 00:21:07.298 "params": { 00:21:07.298 "timeout_sec": 30 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "bdev_nvme_set_options", 00:21:07.298 "params": { 00:21:07.298 "action_on_timeout": "none", 00:21:07.298 "allow_accel_sequence": false, 00:21:07.298 "arbitration_burst": 0, 00:21:07.298 "bdev_retry_count": 3, 00:21:07.298 "ctrlr_loss_timeout_sec": 0, 00:21:07.298 "delay_cmd_submit": true, 00:21:07.298 "dhchap_dhgroups": [ 00:21:07.298 "null", 00:21:07.298 "ffdhe2048", 00:21:07.298 "ffdhe3072", 00:21:07.298 "ffdhe4096", 00:21:07.298 "ffdhe6144", 00:21:07.298 "ffdhe8192" 00:21:07.298 ], 00:21:07.298 "dhchap_digests": [ 00:21:07.298 "sha256", 00:21:07.298 "sha384", 00:21:07.298 "sha512" 00:21:07.298 ], 00:21:07.298 "disable_auto_failback": false, 00:21:07.298 "fast_io_fail_timeout_sec": 0, 00:21:07.298 "generate_uuids": false, 00:21:07.298 "high_priority_weight": 0, 00:21:07.298 "io_path_stat": false, 00:21:07.298 "io_queue_requests": 512, 00:21:07.298 "keep_alive_timeout_ms": 10000, 00:21:07.298 "low_priority_weight": 0, 00:21:07.298 "medium_priority_weight": 0, 00:21:07.298 "nvme_adminq_poll_period_us": 10000, 00:21:07.298 "nvme_error_stat": false, 00:21:07.298 "nvme_ioq_poll_period_us": 0, 00:21:07.298 "rdma_cm_event_timeout_ms": 0, 00:21:07.298 "rdma_max_cq_size": 0, 00:21:07.298 "rdma_srq_size": 0, 00:21:07.298 "reconnect_delay_sec": 0, 00:21:07.298 "timeout_admin_us": 0, 00:21:07.298 "timeout_us": 0, 00:21:07.298 "transport_ack_timeout": 0, 00:21:07.298 "transport_retry_count": 4, 00:21:07.298 "transport_tos": 0 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "bdev_nvme_attach_controller", 00:21:07.298 "params": { 00:21:07.298 "adrfam": "IPv4", 00:21:07.298 "ctrlr_loss_timeout_sec": 0, 00:21:07.298 "ddgst": false, 00:21:07.298 "fast_io_fail_timeout_sec": 0, 00:21:07.298 "hdgst": false, 00:21:07.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.298 "multipath": "multipath", 00:21:07.298 "name": "TLSTEST", 00:21:07.298 "prchk_guard": false, 00:21:07.298 "prchk_reftag": false, 00:21:07.298 "psk": "key0", 00:21:07.298 "reconnect_delay_sec": 0, 00:21:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.298 "traddr": "10.0.0.3", 00:21:07.298 "trsvcid": "4420", 00:21:07.298 "trtype": "TCP" 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "bdev_nvme_set_hotplug", 00:21:07.298 "params": { 00:21:07.298 "enable": false, 00:21:07.298 "period_us": 100000 00:21:07.298 } 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "method": "bdev_wait_for_examine" 
00:21:07.298 } 00:21:07.298 ] 00:21:07.298 }, 00:21:07.298 { 00:21:07.298 "subsystem": "nbd", 00:21:07.298 "config": [] 00:21:07.298 } 00:21:07.298 ] 00:21:07.298 }' 00:21:07.556 [2024-11-06 14:25:34.958486] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:07.556 [2024-11-06 14:25:34.958601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89433 ] 00:21:07.556 [2024-11-06 14:25:35.115576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.813 [2024-11-06 14:25:35.212801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.071 [2024-11-06 14:25:35.511659] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.329 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:08.329 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:08.329 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.329 Running I/O for 10 seconds... 00:21:10.639 5460.00 IOPS, 21.33 MiB/s [2024-11-06T14:25:39.208Z] 5498.50 IOPS, 21.48 MiB/s [2024-11-06T14:25:40.146Z] 5511.00 IOPS, 21.53 MiB/s [2024-11-06T14:25:41.080Z] 5571.00 IOPS, 21.76 MiB/s [2024-11-06T14:25:42.035Z] 5555.20 IOPS, 21.70 MiB/s [2024-11-06T14:25:42.968Z] 5545.33 IOPS, 21.66 MiB/s [2024-11-06T14:25:43.901Z] 5575.29 IOPS, 21.78 MiB/s [2024-11-06T14:25:45.274Z] 5639.12 IOPS, 22.03 MiB/s [2024-11-06T14:25:46.210Z] 5630.56 IOPS, 21.99 MiB/s [2024-11-06T14:25:46.210Z] 5643.80 IOPS, 22.05 MiB/s 00:21:18.575 Latency(us) 00:21:18.575 [2024-11-06T14:25:46.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.575 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.575 Verification LBA range: start 0x0 length 0x2000 00:21:18.575 TLSTESTn1 : 10.02 5645.39 22.05 0.00 0.00 22626.90 5343.70 27021.00 00:21:18.575 [2024-11-06T14:25:46.210Z] =================================================================================================================== 00:21:18.575 [2024-11-06T14:25:46.210Z] Total : 5645.39 22.05 0.00 0.00 22626.90 5343.70 27021.00 00:21:18.575 { 00:21:18.575 "results": [ 00:21:18.575 { 00:21:18.575 "job": "TLSTESTn1", 00:21:18.575 "core_mask": "0x4", 00:21:18.575 "workload": "verify", 00:21:18.575 "status": "finished", 00:21:18.575 "verify_range": { 00:21:18.575 "start": 0, 00:21:18.575 "length": 8192 00:21:18.575 }, 00:21:18.575 "queue_depth": 128, 00:21:18.575 "io_size": 4096, 00:21:18.575 "runtime": 10.019859, 00:21:18.575 "iops": 5645.38882233772, 00:21:18.575 "mibps": 22.052300087256718, 00:21:18.575 "io_failed": 0, 00:21:18.575 "io_timeout": 0, 00:21:18.575 "avg_latency_us": 22626.902493533762, 00:21:18.575 "min_latency_us": 5343.704615384615, 00:21:18.575 "max_latency_us": 27020.996923076924 00:21:18.575 } 00:21:18.575 ], 00:21:18.575 "core_count": 1 00:21:18.575 } 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 89433 00:21:18.575 14:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89433 ']' 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89433 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89433 00:21:18.575 killing process with pid 89433 00:21:18.575 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.575 00:21:18.575 Latency(us) 00:21:18.575 [2024-11-06T14:25:46.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.575 [2024-11-06T14:25:46.210Z] =================================================================================================================== 00:21:18.575 [2024-11-06T14:25:46.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89433' 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89433 00:21:18.575 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89433 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 89395 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89395 ']' 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89395 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89395 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:19.142 killing process with pid 89395 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89395' 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89395 00:21:19.142 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89395 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89597 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- 
# waitforlisten 89597 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89597 ']' 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:20.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.076 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:20.076 [2024-11-06 14:25:47.460269] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:20.076 [2024-11-06 14:25:47.460389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.076 [2024-11-06 14:25:47.619642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.335 [2024-11-06 14:25:47.731366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.335 [2024-11-06 14:25:47.731421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.335 [2024-11-06 14:25:47.731433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.335 [2024-11-06 14:25:47.731451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.335 [2024-11-06 14:25:47.731462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
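As the notices above say, every app in this run starts with -e 0xFFFF, so all tracepoint groups record into a shared-memory buffer while the process lives. A snapshot can be taken at any point; a sketch, using the exact command the notice suggests (output file names are our own choice):

    # Decode the nvmf target's trace buffer (instance id 0, from -i 0).
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the raw buffer for offline analysis later:
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0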
00:21:20.335 [2024-11-06 14:25:47.732371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.594 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:20.594 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:20.594 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.594 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:20.594 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.852 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.852 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.CilNb6IJmL 00:21:20.852 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CilNb6IJmL 00:21:20.852 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:20.852 [2024-11-06 14:25:48.437830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.852 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:21.109 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:21.367 [2024-11-06 14:25:48.845967] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.367 [2024-11-06 14:25:48.846228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:21.367 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:21.624 malloc0 00:21:21.624 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.883 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:21:21.883 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:22.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
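Collected in one place, the setup_nvmf_tgt steps that just ran reduce to the following RPC sequence (commands verbatim from this run; -k marks the listener as TLS-only and --psk ties the keyring entry to the allowed host; the $rpc alias is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o                                  # TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -s SPDK00000000000001 -m 10                                      # subsystem, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4420 -k                                    # -k => secure channel (TLS)
    $rpc bdev_malloc_create 32 4096 -b malloc0                            # 32 MiB backing bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # expose it as nsid 1
    $rpc keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL                    # load the PSK into the keyring
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk key0                             # bind the PSK to host1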
00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=89701 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 89701 /var/tmp/bdevperf.sock 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89701 ']' 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:22.141 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.141 [2024-11-06 14:25:49.724286] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:22.141 [2024-11-06 14:25:49.724398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89701 ] 00:21:22.399 [2024-11-06 14:25:49.879166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.399 [2024-11-06 14:25:49.959462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.964 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:22.964 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:22.965 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:21:23.222 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:23.480 [2024-11-06 14:25:50.967631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.480 nvme0n1 00:21:23.480 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.739 Running I/O for 1 seconds... 
00:21:24.673 5808.00 IOPS, 22.69 MiB/s 00:21:24.673 Latency(us) 00:21:24.673 [2024-11-06T14:25:52.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.673 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.673 Verification LBA range: start 0x0 length 0x2000 00:21:24.673 nvme0n1 : 1.01 5861.38 22.90 0.00 0.00 21682.68 4738.76 16232.76 00:21:24.673 [2024-11-06T14:25:52.308Z] =================================================================================================================== 00:21:24.673 [2024-11-06T14:25:52.308Z] Total : 5861.38 22.90 0.00 0.00 21682.68 4738.76 16232.76 00:21:24.673 { 00:21:24.673 "results": [ 00:21:24.673 { 00:21:24.673 "job": "nvme0n1", 00:21:24.673 "core_mask": "0x2", 00:21:24.673 "workload": "verify", 00:21:24.673 "status": "finished", 00:21:24.673 "verify_range": { 00:21:24.673 "start": 0, 00:21:24.673 "length": 8192 00:21:24.673 }, 00:21:24.673 "queue_depth": 128, 00:21:24.673 "io_size": 4096, 00:21:24.673 "runtime": 1.012901, 00:21:24.673 "iops": 5861.382306859209, 00:21:24.673 "mibps": 22.896024636168786, 00:21:24.673 "io_failed": 0, 00:21:24.673 "io_timeout": 0, 00:21:24.673 "avg_latency_us": 21682.682102590017, 00:21:24.673 "min_latency_us": 4738.756923076923, 00:21:24.673 "max_latency_us": 16232.763076923076 00:21:24.673 } 00:21:24.673 ], 00:21:24.673 "core_count": 1 00:21:24.673 } 00:21:24.673 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 89701 00:21:24.673 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89701 ']' 00:21:24.673 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89701 00:21:24.673 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:24.673 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:24.674 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89701 00:21:24.674 killing process with pid 89701 00:21:24.674 Received shutdown signal, test time was about 1.000000 seconds 00:21:24.674 00:21:24.674 Latency(us) 00:21:24.674 [2024-11-06T14:25:52.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.674 [2024-11-06T14:25:52.309Z] =================================================================================================================== 00:21:24.674 [2024-11-06T14:25:52.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.674 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:24.674 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:24.674 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89701' 00:21:24.674 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89701 00:21:24.674 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89701 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 89597 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89597 ']' 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89597 00:21:25.239 14:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89597 00:21:25.239 killing process with pid 89597 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89597' 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89597 00:21:25.239 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89597 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89784 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89784 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89784 ']' 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:25.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:25.803 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.061 [2024-11-06 14:25:53.513894] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:26.061 [2024-11-06 14:25:53.514027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.061 [2024-11-06 14:25:53.671334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.320 [2024-11-06 14:25:53.766250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.320 [2024-11-06 14:25:53.766299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:26.320 [2024-11-06 14:25:53.766309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.320 [2024-11-06 14:25:53.766323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.320 [2024-11-06 14:25:53.766331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.320 [2024-11-06 14:25:53.767064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.886 [2024-11-06 14:25:54.392689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.886 malloc0 00:21:26.886 [2024-11-06 14:25:54.434241] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.886 [2024-11-06 14:25:54.434459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=89834 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 89834 /var/tmp/bdevperf.sock 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89834 ']' 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:26.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:26.886 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.143 [2024-11-06 14:25:54.521525] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:21:27.143 [2024-11-06 14:25:54.521618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89834 ] 00:21:27.143 [2024-11-06 14:25:54.670741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.143 [2024-11-06 14:25:54.751746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.076 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.076 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:28.076 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CilNb6IJmL 00:21:28.076 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:28.333 [2024-11-06 14:25:55.762527] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.333 nvme0n1 00:21:28.333 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:28.333 Running I/O for 1 seconds... 00:21:29.714 5735.00 IOPS, 22.40 MiB/s 00:21:29.714 Latency(us) 00:21:29.714 [2024-11-06T14:25:57.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.714 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:29.714 Verification LBA range: start 0x0 length 0x2000 00:21:29.714 nvme0n1 : 1.01 5799.11 22.65 0.00 0.00 21915.48 4058.19 17241.01 00:21:29.714 [2024-11-06T14:25:57.349Z] =================================================================================================================== 00:21:29.714 [2024-11-06T14:25:57.349Z] Total : 5799.11 22.65 0.00 0.00 21915.48 4058.19 17241.01 00:21:29.714 { 00:21:29.714 "results": [ 00:21:29.714 { 00:21:29.714 "job": "nvme0n1", 00:21:29.714 "core_mask": "0x2", 00:21:29.714 "workload": "verify", 00:21:29.714 "status": "finished", 00:21:29.714 "verify_range": { 00:21:29.714 "start": 0, 00:21:29.714 "length": 8192 00:21:29.714 }, 00:21:29.714 "queue_depth": 128, 00:21:29.714 "io_size": 4096, 00:21:29.714 "runtime": 1.01119, 00:21:29.714 "iops": 5799.107981684945, 00:21:29.714 "mibps": 22.652765553456817, 00:21:29.714 "io_failed": 0, 00:21:29.714 "io_timeout": 0, 00:21:29.714 "avg_latency_us": 21915.48487354392, 00:21:29.714 "min_latency_us": 4058.190769230769, 00:21:29.714 "max_latency_us": 17241.009230769232 00:21:29.714 } 00:21:29.714 ], 00:21:29.714 "core_count": 1 00:21:29.714 } 00:21:29.714 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:29.714 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.714 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.714 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.714 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:21:29.714 "subsystems": [ 00:21:29.714 { 00:21:29.714 "subsystem": "keyring", 00:21:29.714 "config": [ 00:21:29.714 { 00:21:29.714 "method": "keyring_file_add_key", 00:21:29.714 "params": { 00:21:29.714 "name": "key0", 00:21:29.714 "path": "/tmp/tmp.CilNb6IJmL" 00:21:29.714 } 00:21:29.714 } 00:21:29.714 ] 00:21:29.714 }, 00:21:29.714 { 00:21:29.714 "subsystem": "iobuf", 00:21:29.714 "config": [ 00:21:29.714 { 00:21:29.714 "method": "iobuf_set_options", 00:21:29.714 "params": { 00:21:29.714 "enable_numa": false, 00:21:29.714 "large_bufsize": 135168, 00:21:29.714 "large_pool_count": 1024, 00:21:29.714 "small_bufsize": 8192, 00:21:29.714 "small_pool_count": 8192 00:21:29.714 } 00:21:29.714 } 00:21:29.714 ] 00:21:29.714 }, 00:21:29.714 { 00:21:29.714 "subsystem": "sock", 00:21:29.714 "config": [ 00:21:29.714 { 00:21:29.714 "method": "sock_set_default_impl", 00:21:29.714 "params": { 00:21:29.714 "impl_name": "posix" 00:21:29.714 } 00:21:29.714 }, 00:21:29.714 { 00:21:29.714 "method": "sock_impl_set_options", 00:21:29.714 "params": { 00:21:29.714 "enable_ktls": false, 00:21:29.714 "enable_placement_id": 0, 00:21:29.714 "enable_quickack": false, 00:21:29.714 "enable_recv_pipe": true, 00:21:29.714 "enable_zerocopy_send_client": false, 00:21:29.714 "enable_zerocopy_send_server": true, 00:21:29.714 "impl_name": "ssl", 00:21:29.714 "recv_buf_size": 4096, 00:21:29.714 "send_buf_size": 4096, 00:21:29.714 "tls_version": 0, 00:21:29.714 "zerocopy_threshold": 0 00:21:29.714 } 00:21:29.714 }, 00:21:29.715 { 00:21:29.715 "method": "sock_impl_set_options", 00:21:29.715 "params": { 00:21:29.715 "enable_ktls": false, 00:21:29.715 "enable_placement_id": 0, 00:21:29.715 "enable_quickack": false, 00:21:29.715 "enable_recv_pipe": true, 00:21:29.715 "enable_zerocopy_send_client": false, 00:21:29.715 "enable_zerocopy_send_server": true, 00:21:29.715 "impl_name": "posix", 00:21:29.715 "recv_buf_size": 2097152, 00:21:29.715 "send_buf_size": 2097152, 00:21:29.715 "tls_version": 0, 00:21:29.715 "zerocopy_threshold": 0 00:21:29.715 } 00:21:29.715 } 00:21:29.715 ] 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "subsystem": "vmd", 00:21:29.715 "config": [] 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "subsystem": "accel", 00:21:29.715 "config": [ 00:21:29.715 { 00:21:29.715 "method": "accel_set_options", 00:21:29.715 "params": { 00:21:29.715 "buf_count": 2048, 00:21:29.715 "large_cache_size": 16, 00:21:29.715 "sequence_count": 2048, 00:21:29.715 "small_cache_size": 128, 00:21:29.715 "task_count": 2048 00:21:29.715 } 00:21:29.715 } 00:21:29.715 ] 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "subsystem": "bdev", 00:21:29.715 "config": [ 00:21:29.715 { 00:21:29.715 "method": "bdev_set_options", 00:21:29.715 "params": { 00:21:29.715 "bdev_auto_examine": true, 00:21:29.715 "bdev_io_cache_size": 256, 00:21:29.715 "bdev_io_pool_size": 65535, 00:21:29.715 "iobuf_large_cache_size": 16, 00:21:29.715 "iobuf_small_cache_size": 128 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "bdev_raid_set_options", 00:21:29.715 "params": { 00:21:29.715 "process_max_bandwidth_mb_sec": 0, 00:21:29.715 "process_window_size_kb": 1024 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "bdev_iscsi_set_options", 00:21:29.715 "params": { 00:21:29.715 "timeout_sec": 30 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "bdev_nvme_set_options", 00:21:29.715 "params": { 00:21:29.715 "action_on_timeout": "none", 00:21:29.715 "allow_accel_sequence": false, 00:21:29.715 "arbitration_burst": 0, 00:21:29.715 
"bdev_retry_count": 3, 00:21:29.715 "ctrlr_loss_timeout_sec": 0, 00:21:29.715 "delay_cmd_submit": true, 00:21:29.715 "dhchap_dhgroups": [ 00:21:29.715 "null", 00:21:29.715 "ffdhe2048", 00:21:29.715 "ffdhe3072", 00:21:29.715 "ffdhe4096", 00:21:29.715 "ffdhe6144", 00:21:29.715 "ffdhe8192" 00:21:29.715 ], 00:21:29.715 "dhchap_digests": [ 00:21:29.715 "sha256", 00:21:29.715 "sha384", 00:21:29.715 "sha512" 00:21:29.715 ], 00:21:29.715 "disable_auto_failback": false, 00:21:29.715 "fast_io_fail_timeout_sec": 0, 00:21:29.715 "generate_uuids": false, 00:21:29.715 "high_priority_weight": 0, 00:21:29.715 "io_path_stat": false, 00:21:29.715 "io_queue_requests": 0, 00:21:29.715 "keep_alive_timeout_ms": 10000, 00:21:29.715 "low_priority_weight": 0, 00:21:29.715 "medium_priority_weight": 0, 00:21:29.715 "nvme_adminq_poll_period_us": 10000, 00:21:29.715 "nvme_error_stat": false, 00:21:29.715 "nvme_ioq_poll_period_us": 0, 00:21:29.715 "rdma_cm_event_timeout_ms": 0, 00:21:29.715 "rdma_max_cq_size": 0, 00:21:29.715 "rdma_srq_size": 0, 00:21:29.715 "reconnect_delay_sec": 0, 00:21:29.715 "timeout_admin_us": 0, 00:21:29.715 "timeout_us": 0, 00:21:29.715 "transport_ack_timeout": 0, 00:21:29.715 "transport_retry_count": 4, 00:21:29.715 "transport_tos": 0 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "bdev_nvme_set_hotplug", 00:21:29.715 "params": { 00:21:29.715 "enable": false, 00:21:29.715 "period_us": 100000 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "bdev_malloc_create", 00:21:29.715 "params": { 00:21:29.715 "block_size": 4096, 00:21:29.715 "dif_is_head_of_md": false, 00:21:29.715 "dif_pi_format": 0, 00:21:29.715 "dif_type": 0, 00:21:29.715 "md_size": 0, 00:21:29.715 "name": "malloc0", 00:21:29.715 "num_blocks": 8192, 00:21:29.715 "optimal_io_boundary": 0, 00:21:29.715 "physical_block_size": 4096, 00:21:29.715 "uuid": "4892d459-5b8c-4ae8-ae69-250e4a62b139" 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "bdev_wait_for_examine" 00:21:29.715 } 00:21:29.715 ] 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "subsystem": "nbd", 00:21:29.715 "config": [] 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "subsystem": "scheduler", 00:21:29.715 "config": [ 00:21:29.715 { 00:21:29.715 "method": "framework_set_scheduler", 00:21:29.715 "params": { 00:21:29.715 "name": "static" 00:21:29.715 } 00:21:29.715 } 00:21:29.715 ] 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "subsystem": "nvmf", 00:21:29.715 "config": [ 00:21:29.715 { 00:21:29.715 "method": "nvmf_set_config", 00:21:29.715 "params": { 00:21:29.715 "admin_cmd_passthru": { 00:21:29.715 "identify_ctrlr": false 00:21:29.715 }, 00:21:29.715 "dhchap_dhgroups": [ 00:21:29.715 "null", 00:21:29.715 "ffdhe2048", 00:21:29.715 "ffdhe3072", 00:21:29.715 "ffdhe4096", 00:21:29.715 "ffdhe6144", 00:21:29.715 "ffdhe8192" 00:21:29.715 ], 00:21:29.715 "dhchap_digests": [ 00:21:29.715 "sha256", 00:21:29.715 "sha384", 00:21:29.715 "sha512" 00:21:29.715 ], 00:21:29.715 "discovery_filter": "match_any" 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "nvmf_set_max_subsystems", 00:21:29.715 "params": { 00:21:29.715 "max_subsystems": 1024 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "nvmf_set_crdt", 00:21:29.715 "params": { 00:21:29.715 "crdt1": 0, 00:21:29.715 "crdt2": 0, 00:21:29.715 "crdt3": 0 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "nvmf_create_transport", 00:21:29.715 "params": { 00:21:29.715 "abort_timeout_sec": 1, 00:21:29.715 "ack_timeout": 0, 
00:21:29.715 "buf_cache_size": 4294967295, 00:21:29.715 "c2h_success": false, 00:21:29.715 "data_wr_pool_size": 0, 00:21:29.715 "dif_insert_or_strip": false, 00:21:29.715 "in_capsule_data_size": 4096, 00:21:29.715 "io_unit_size": 131072, 00:21:29.715 "max_aq_depth": 128, 00:21:29.715 "max_io_qpairs_per_ctrlr": 127, 00:21:29.715 "max_io_size": 131072, 00:21:29.715 "max_queue_depth": 128, 00:21:29.715 "num_shared_buffers": 511, 00:21:29.715 "sock_priority": 0, 00:21:29.715 "trtype": "TCP", 00:21:29.715 "zcopy": false 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "nvmf_create_subsystem", 00:21:29.715 "params": { 00:21:29.715 "allow_any_host": false, 00:21:29.715 "ana_reporting": false, 00:21:29.715 "max_cntlid": 65519, 00:21:29.715 "max_namespaces": 32, 00:21:29.715 "min_cntlid": 1, 00:21:29.715 "model_number": "SPDK bdev Controller", 00:21:29.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.715 "serial_number": "00000000000000000000" 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "nvmf_subsystem_add_host", 00:21:29.715 "params": { 00:21:29.715 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.715 "psk": "key0" 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "nvmf_subsystem_add_ns", 00:21:29.715 "params": { 00:21:29.715 "namespace": { 00:21:29.715 "bdev_name": "malloc0", 00:21:29.715 "nguid": "4892D4595B8C4AE8AE69250E4A62B139", 00:21:29.715 "no_auto_visible": false, 00:21:29.715 "nsid": 1, 00:21:29.715 "uuid": "4892d459-5b8c-4ae8-ae69-250e4a62b139" 00:21:29.715 }, 00:21:29.715 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:29.715 } 00:21:29.715 }, 00:21:29.715 { 00:21:29.715 "method": "nvmf_subsystem_add_listener", 00:21:29.715 "params": { 00:21:29.715 "listen_address": { 00:21:29.715 "adrfam": "IPv4", 00:21:29.715 "traddr": "10.0.0.3", 00:21:29.715 "trsvcid": "4420", 00:21:29.715 "trtype": "TCP" 00:21:29.715 }, 00:21:29.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.715 "secure_channel": false, 00:21:29.715 "sock_impl": "ssl" 00:21:29.715 } 00:21:29.715 } 00:21:29.715 ] 00:21:29.715 } 00:21:29.715 ] 00:21:29.715 }' 00:21:29.715 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:29.973 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:29.973 "subsystems": [ 00:21:29.973 { 00:21:29.973 "subsystem": "keyring", 00:21:29.973 "config": [ 00:21:29.973 { 00:21:29.973 "method": "keyring_file_add_key", 00:21:29.973 "params": { 00:21:29.973 "name": "key0", 00:21:29.973 "path": "/tmp/tmp.CilNb6IJmL" 00:21:29.973 } 00:21:29.973 } 00:21:29.973 ] 00:21:29.973 }, 00:21:29.973 { 00:21:29.973 "subsystem": "iobuf", 00:21:29.973 "config": [ 00:21:29.973 { 00:21:29.973 "method": "iobuf_set_options", 00:21:29.973 "params": { 00:21:29.973 "enable_numa": false, 00:21:29.973 "large_bufsize": 135168, 00:21:29.973 "large_pool_count": 1024, 00:21:29.973 "small_bufsize": 8192, 00:21:29.973 "small_pool_count": 8192 00:21:29.973 } 00:21:29.973 } 00:21:29.973 ] 00:21:29.973 }, 00:21:29.973 { 00:21:29.973 "subsystem": "sock", 00:21:29.973 "config": [ 00:21:29.973 { 00:21:29.973 "method": "sock_set_default_impl", 00:21:29.973 "params": { 00:21:29.973 "impl_name": "posix" 00:21:29.973 } 00:21:29.973 }, 00:21:29.973 { 00:21:29.973 "method": "sock_impl_set_options", 00:21:29.973 "params": { 00:21:29.973 "enable_ktls": false, 00:21:29.973 "enable_placement_id": 0, 
00:21:29.973 "enable_quickack": false, 00:21:29.973 "enable_recv_pipe": true, 00:21:29.973 "enable_zerocopy_send_client": false, 00:21:29.973 "enable_zerocopy_send_server": true, 00:21:29.973 "impl_name": "ssl", 00:21:29.973 "recv_buf_size": 4096, 00:21:29.973 "send_buf_size": 4096, 00:21:29.973 "tls_version": 0, 00:21:29.973 "zerocopy_threshold": 0 00:21:29.973 } 00:21:29.973 }, 00:21:29.973 { 00:21:29.973 "method": "sock_impl_set_options", 00:21:29.973 "params": { 00:21:29.973 "enable_ktls": false, 00:21:29.973 "enable_placement_id": 0, 00:21:29.973 "enable_quickack": false, 00:21:29.973 "enable_recv_pipe": true, 00:21:29.973 "enable_zerocopy_send_client": false, 00:21:29.973 "enable_zerocopy_send_server": true, 00:21:29.973 "impl_name": "posix", 00:21:29.973 "recv_buf_size": 2097152, 00:21:29.973 "send_buf_size": 2097152, 00:21:29.973 "tls_version": 0, 00:21:29.973 "zerocopy_threshold": 0 00:21:29.973 } 00:21:29.973 } 00:21:29.973 ] 00:21:29.973 }, 00:21:29.973 { 00:21:29.973 "subsystem": "vmd", 00:21:29.973 "config": [] 00:21:29.973 }, 00:21:29.973 { 00:21:29.973 "subsystem": "accel", 00:21:29.974 "config": [ 00:21:29.974 { 00:21:29.974 "method": "accel_set_options", 00:21:29.974 "params": { 00:21:29.974 "buf_count": 2048, 00:21:29.974 "large_cache_size": 16, 00:21:29.974 "sequence_count": 2048, 00:21:29.974 "small_cache_size": 128, 00:21:29.974 "task_count": 2048 00:21:29.974 } 00:21:29.974 } 00:21:29.974 ] 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "subsystem": "bdev", 00:21:29.974 "config": [ 00:21:29.974 { 00:21:29.974 "method": "bdev_set_options", 00:21:29.974 "params": { 00:21:29.974 "bdev_auto_examine": true, 00:21:29.974 "bdev_io_cache_size": 256, 00:21:29.974 "bdev_io_pool_size": 65535, 00:21:29.974 "iobuf_large_cache_size": 16, 00:21:29.974 "iobuf_small_cache_size": 128 00:21:29.974 } 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "method": "bdev_raid_set_options", 00:21:29.974 "params": { 00:21:29.974 "process_max_bandwidth_mb_sec": 0, 00:21:29.974 "process_window_size_kb": 1024 00:21:29.974 } 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "method": "bdev_iscsi_set_options", 00:21:29.974 "params": { 00:21:29.974 "timeout_sec": 30 00:21:29.974 } 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "method": "bdev_nvme_set_options", 00:21:29.974 "params": { 00:21:29.974 "action_on_timeout": "none", 00:21:29.974 "allow_accel_sequence": false, 00:21:29.974 "arbitration_burst": 0, 00:21:29.974 "bdev_retry_count": 3, 00:21:29.974 "ctrlr_loss_timeout_sec": 0, 00:21:29.974 "delay_cmd_submit": true, 00:21:29.974 "dhchap_dhgroups": [ 00:21:29.974 "null", 00:21:29.974 "ffdhe2048", 00:21:29.974 "ffdhe3072", 00:21:29.974 "ffdhe4096", 00:21:29.974 "ffdhe6144", 00:21:29.974 "ffdhe8192" 00:21:29.974 ], 00:21:29.974 "dhchap_digests": [ 00:21:29.974 "sha256", 00:21:29.974 "sha384", 00:21:29.974 "sha512" 00:21:29.974 ], 00:21:29.974 "disable_auto_failback": false, 00:21:29.974 "fast_io_fail_timeout_sec": 0, 00:21:29.974 "generate_uuids": false, 00:21:29.974 "high_priority_weight": 0, 00:21:29.974 "io_path_stat": false, 00:21:29.974 "io_queue_requests": 512, 00:21:29.974 "keep_alive_timeout_ms": 10000, 00:21:29.974 "low_priority_weight": 0, 00:21:29.974 "medium_priority_weight": 0, 00:21:29.974 "nvme_adminq_poll_period_us": 10000, 00:21:29.974 "nvme_error_stat": false, 00:21:29.974 "nvme_ioq_poll_period_us": 0, 00:21:29.974 "rdma_cm_event_timeout_ms": 0, 00:21:29.974 "rdma_max_cq_size": 0, 00:21:29.974 "rdma_srq_size": 0, 00:21:29.974 "reconnect_delay_sec": 0, 00:21:29.974 "timeout_admin_us": 0, 00:21:29.974 
"timeout_us": 0, 00:21:29.974 "transport_ack_timeout": 0, 00:21:29.974 "transport_retry_count": 4, 00:21:29.974 "transport_tos": 0 00:21:29.974 } 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "method": "bdev_nvme_attach_controller", 00:21:29.974 "params": { 00:21:29.974 "adrfam": "IPv4", 00:21:29.974 "ctrlr_loss_timeout_sec": 0, 00:21:29.974 "ddgst": false, 00:21:29.974 "fast_io_fail_timeout_sec": 0, 00:21:29.974 "hdgst": false, 00:21:29.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.974 "multipath": "multipath", 00:21:29.974 "name": "nvme0", 00:21:29.974 "prchk_guard": false, 00:21:29.974 "prchk_reftag": false, 00:21:29.974 "psk": "key0", 00:21:29.974 "reconnect_delay_sec": 0, 00:21:29.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.974 "traddr": "10.0.0.3", 00:21:29.974 "trsvcid": "4420", 00:21:29.974 "trtype": "TCP" 00:21:29.974 } 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "method": "bdev_nvme_set_hotplug", 00:21:29.974 "params": { 00:21:29.974 "enable": false, 00:21:29.974 "period_us": 100000 00:21:29.974 } 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "method": "bdev_enable_histogram", 00:21:29.974 "params": { 00:21:29.974 "enable": true, 00:21:29.974 "name": "nvme0n1" 00:21:29.974 } 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "method": "bdev_wait_for_examine" 00:21:29.974 } 00:21:29.974 ] 00:21:29.974 }, 00:21:29.974 { 00:21:29.974 "subsystem": "nbd", 00:21:29.974 "config": [] 00:21:29.974 } 00:21:29.974 ] 00:21:29.974 }' 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 89834 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89834 ']' 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89834 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89834 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89834' 00:21:29.974 killing process with pid 89834 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89834 00:21:29.974 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.974 00:21:29.974 Latency(us) 00:21:29.974 [2024-11-06T14:25:57.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.974 [2024-11-06T14:25:57.609Z] =================================================================================================================== 00:21:29.974 [2024-11-06T14:25:57.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.974 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89834 00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 89784 00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89784 ']' 00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89784 
00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89784 00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:30.539 killing process with pid 89784 00:21:30.539 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:30.540 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89784' 00:21:30.540 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89784 00:21:30.540 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89784 00:21:31.106 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:31.106 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.106 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:31.106 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.106 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:31.106 "subsystems": [ 00:21:31.106 { 00:21:31.106 "subsystem": "keyring", 00:21:31.106 "config": [ 00:21:31.106 { 00:21:31.106 "method": "keyring_file_add_key", 00:21:31.106 "params": { 00:21:31.106 "name": "key0", 00:21:31.106 "path": "/tmp/tmp.CilNb6IJmL" 00:21:31.106 } 00:21:31.106 } 00:21:31.106 ] 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "iobuf", 00:21:31.106 "config": [ 00:21:31.106 { 00:21:31.106 "method": "iobuf_set_options", 00:21:31.106 "params": { 00:21:31.106 "enable_numa": false, 00:21:31.106 "large_bufsize": 135168, 00:21:31.106 "large_pool_count": 1024, 00:21:31.106 "small_bufsize": 8192, 00:21:31.106 "small_pool_count": 8192 00:21:31.106 } 00:21:31.106 } 00:21:31.106 ] 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "sock", 00:21:31.106 "config": [ 00:21:31.106 { 00:21:31.106 "method": "sock_set_default_impl", 00:21:31.106 "params": { 00:21:31.106 "impl_name": "posix" 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "sock_impl_set_options", 00:21:31.106 "params": { 00:21:31.106 "enable_ktls": false, 00:21:31.106 "enable_placement_id": 0, 00:21:31.106 "enable_quickack": false, 00:21:31.106 "enable_recv_pipe": true, 00:21:31.106 "enable_zerocopy_send_client": false, 00:21:31.106 "enable_zerocopy_send_server": true, 00:21:31.106 "impl_name": "ssl", 00:21:31.106 "recv_buf_size": 4096, 00:21:31.106 "send_buf_size": 4096, 00:21:31.106 "tls_version": 0, 00:21:31.106 "zerocopy_threshold": 0 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "sock_impl_set_options", 00:21:31.106 "params": { 00:21:31.106 "enable_ktls": false, 00:21:31.106 "enable_placement_id": 0, 00:21:31.106 "enable_quickack": false, 00:21:31.106 "enable_recv_pipe": true, 00:21:31.106 "enable_zerocopy_send_client": false, 00:21:31.106 "enable_zerocopy_send_server": true, 00:21:31.106 "impl_name": "posix", 00:21:31.106 "recv_buf_size": 2097152, 00:21:31.106 "send_buf_size": 2097152, 00:21:31.106 "tls_version": 0, 00:21:31.106 "zerocopy_threshold": 0 00:21:31.106 } 
00:21:31.106 } 00:21:31.106 ] 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "vmd", 00:21:31.106 "config": [] 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "accel", 00:21:31.106 "config": [ 00:21:31.106 { 00:21:31.106 "method": "accel_set_options", 00:21:31.106 "params": { 00:21:31.106 "buf_count": 2048, 00:21:31.106 "large_cache_size": 16, 00:21:31.106 "sequence_count": 2048, 00:21:31.106 "small_cache_size": 128, 00:21:31.106 "task_count": 2048 00:21:31.106 } 00:21:31.106 } 00:21:31.106 ] 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "bdev", 00:21:31.106 "config": [ 00:21:31.106 { 00:21:31.106 "method": "bdev_set_options", 00:21:31.106 "params": { 00:21:31.106 "bdev_auto_examine": true, 00:21:31.106 "bdev_io_cache_size": 256, 00:21:31.106 "bdev_io_pool_size": 65535, 00:21:31.106 "iobuf_large_cache_size": 16, 00:21:31.106 "iobuf_small_cache_size": 128 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "bdev_raid_set_options", 00:21:31.106 "params": { 00:21:31.106 "process_max_bandwidth_mb_sec": 0, 00:21:31.106 "process_window_size_kb": 1024 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "bdev_iscsi_set_options", 00:21:31.106 "params": { 00:21:31.106 "timeout_sec": 30 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "bdev_nvme_set_options", 00:21:31.106 "params": { 00:21:31.106 "action_on_timeout": "none", 00:21:31.106 "allow_accel_sequence": false, 00:21:31.106 "arbitration_burst": 0, 00:21:31.106 "bdev_retry_count": 3, 00:21:31.106 "ctrlr_loss_timeout_sec": 0, 00:21:31.106 "delay_cmd_submit": true, 00:21:31.106 "dhchap_dhgroups": [ 00:21:31.106 "null", 00:21:31.106 "ffdhe2048", 00:21:31.106 "ffdhe3072", 00:21:31.106 "ffdhe4096", 00:21:31.106 "ffdhe6144", 00:21:31.106 "ffdhe8192" 00:21:31.106 ], 00:21:31.106 "dhchap_digests": [ 00:21:31.106 "sha256", 00:21:31.106 "sha384", 00:21:31.106 "sha512" 00:21:31.106 ], 00:21:31.106 "disable_auto_failback": false, 00:21:31.106 "fast_io_fail_timeout_sec": 0, 00:21:31.106 "generate_uuids": false, 00:21:31.106 "high_priority_weight": 0, 00:21:31.106 "io_path_stat": false, 00:21:31.106 "io_queue_requests": 0, 00:21:31.106 "keep_alive_timeout_ms": 10000, 00:21:31.106 "low_priority_weight": 0, 00:21:31.106 "medium_priority_weight": 0, 00:21:31.106 "nvme_adminq_poll_period_us": 10000, 00:21:31.106 "nvme_error_stat": false, 00:21:31.106 "nvme_ioq_poll_period_us": 0, 00:21:31.106 "rdma_cm_event_timeout_ms": 0, 00:21:31.106 "rdma_max_cq_size": 0, 00:21:31.106 "rdma_srq_size": 0, 00:21:31.106 "reconnect_delay_sec": 0, 00:21:31.106 "timeout_admin_us": 0, 00:21:31.106 "timeout_us": 0, 00:21:31.106 "transport_ack_timeout": 0, 00:21:31.106 "transport_retry_count": 4, 00:21:31.106 "transport_tos": 0 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "bdev_nvme_set_hotplug", 00:21:31.106 "params": { 00:21:31.106 "enable": false, 00:21:31.106 "period_us": 100000 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "bdev_malloc_create", 00:21:31.106 "params": { 00:21:31.106 "block_size": 4096, 00:21:31.106 "dif_is_head_of_md": false, 00:21:31.106 "dif_pi_format": 0, 00:21:31.106 "dif_type": 0, 00:21:31.106 "md_size": 0, 00:21:31.106 "name": "malloc0", 00:21:31.106 "num_blocks": 8192, 00:21:31.106 "optimal_io_boundary": 0, 00:21:31.106 "physical_block_size": 4096, 00:21:31.106 "uuid": "4892d459-5b8c-4ae8-ae69-250e4a62b139" 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "method": "bdev_wait_for_examine" 00:21:31.106 } 00:21:31.106 ] 
00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "nbd", 00:21:31.106 "config": [] 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "scheduler", 00:21:31.106 "config": [ 00:21:31.106 { 00:21:31.106 "method": "framework_set_scheduler", 00:21:31.106 "params": { 00:21:31.106 "name": "static" 00:21:31.106 } 00:21:31.106 } 00:21:31.106 ] 00:21:31.106 }, 00:21:31.106 { 00:21:31.106 "subsystem": "nvmf", 00:21:31.106 "config": [ 00:21:31.106 { 00:21:31.106 "method": "nvmf_set_config", 00:21:31.106 "params": { 00:21:31.106 "admin_cmd_passthru": { 00:21:31.106 "identify_ctrlr": false 00:21:31.106 }, 00:21:31.106 "dhchap_dhgroups": [ 00:21:31.106 "null", 00:21:31.106 "ffdhe2048", 00:21:31.106 "ffdhe3072", 00:21:31.106 "ffdhe4096", 00:21:31.106 "ffdhe6144", 00:21:31.106 "ffdhe8192" 00:21:31.106 ], 00:21:31.106 "dhchap_digests": [ 00:21:31.106 "sha256", 00:21:31.106 "sha384", 00:21:31.106 "sha512" 00:21:31.106 ], 00:21:31.106 "discovery_filter": "match_any" 00:21:31.106 } 00:21:31.106 }, 00:21:31.106 { 00:21:31.107 "method": "nvmf_set_max_subsystems", 00:21:31.107 "params": { 00:21:31.107 "max_subsystems": 1024 00:21:31.107 } 00:21:31.107 }, 00:21:31.107 { 00:21:31.107 "method": "nvmf_set_crdt", 00:21:31.107 "params": { 00:21:31.107 "crdt1": 0, 00:21:31.107 "crdt2": 0, 00:21:31.107 "crdt3": 0 00:21:31.107 } 00:21:31.107 }, 00:21:31.107 { 00:21:31.107 "method": "nvmf_create_transport", 00:21:31.107 "params": { 00:21:31.107 "abort_timeout_sec": 1, 00:21:31.107 "ack_timeout": 0, 00:21:31.107 "buf_cache_size": 4294967295, 00:21:31.107 "c2h_success": false, 00:21:31.107 "data_wr_pool_size": 0, 00:21:31.107 "dif_insert_or_strip": false, 00:21:31.107 "in_capsule_data_size": 4096, 00:21:31.107 "io_unit_size": 131072, 00:21:31.107 "max_aq_depth": 128, 00:21:31.107 "max_io_qpairs_per_ctrlr": 127, 00:21:31.107 "max_io_size": 131072, 00:21:31.107 "max_queue_depth": 128, 00:21:31.107 "num_shared_buffers": 511, 00:21:31.107 "sock_priority": 0, 00:21:31.107 "trtype": "TCP", 00:21:31.107 "zcopy": false 00:21:31.107 } 00:21:31.107 }, 00:21:31.107 { 00:21:31.107 "method": "nvmf_create_subsystem", 00:21:31.107 "params": { 00:21:31.107 "allow_any_host": false, 00:21:31.107 "ana_reporting": false, 00:21:31.107 "max_cntlid": 65519, 00:21:31.107 "max_namespaces": 32, 00:21:31.107 "min_cntlid": 1, 00:21:31.107 "model_number": "SPDK bdev Controller", 00:21:31.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.107 "serial_number": "00000000000000000000" 00:21:31.107 } 00:21:31.107 }, 00:21:31.107 { 00:21:31.107 "method": "nvmf_subsystem_add_host", 00:21:31.107 "params": { 00:21:31.107 "host": "nqn.2016-06.io.spdk:host1", 00:21:31.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.107 "psk": "key0" 00:21:31.107 } 00:21:31.107 }, 00:21:31.107 { 00:21:31.107 "method": "nvmf_subsystem_add_ns", 00:21:31.107 "params": { 00:21:31.107 "namespace": { 00:21:31.107 "bdev_name": "malloc0", 00:21:31.107 "nguid": "4892D4595B8C4AE8AE69250E4A62B139", 00:21:31.107 "no_auto_visible": false, 00:21:31.107 "nsid": 1, 00:21:31.107 "uuid": "4892d459-5b8c-4ae8-ae69-250e4a62b139" 00:21:31.107 }, 00:21:31.107 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:31.107 } 00:21:31.107 }, 00:21:31.107 { 00:21:31.107 "method": "nvmf_subsystem_add_listener", 00:21:31.107 "params": { 00:21:31.107 "listen_address": { 00:21:31.107 "adrfam": "IPv4", 00:21:31.107 "traddr": "10.0.0.3", 00:21:31.107 "trsvcid": "4420", 00:21:31.107 "trtype": "TCP" 00:21:31.107 }, 00:21:31.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.107 "secure_channel": false, 00:21:31.107 
"sock_impl": "ssl" 00:21:31.107 } 00:21:31.107 } 00:21:31.107 ] 00:21:31.107 } 00:21:31.107 ] 00:21:31.107 }' 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89932 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89932 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89932 ']' 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.107 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.107 [2024-11-06 14:25:58.715442] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:31.107 [2024-11-06 14:25:58.715541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.366 [2024-11-06 14:25:58.867105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.366 [2024-11-06 14:25:58.964852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.366 [2024-11-06 14:25:58.964905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.366 [2024-11-06 14:25:58.964915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.366 [2024-11-06 14:25:58.964929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.366 [2024-11-06 14:25:58.964939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:31.366 [2024-11-06 14:25:58.965722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.931 [2024-11-06 14:25:59.342659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.931 [2024-11-06 14:25:59.374593] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:31.931 [2024-11-06 14:25:59.374796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=89971 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 89971 /var/tmp/bdevperf.sock 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 89971 ']' 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.931 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:31.932 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
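bdevperf is launched the same way on the initiator side: -z keeps it idle until RPCs arrive, -r gives it a private RPC socket so it does not collide with the target's /var/tmp/spdk.sock, and the JSON printed next is piped in as -c /dev/fd/63. A sketch reconstructed from the tls.sh@274 command traced below ($bdevperf_config is a placeholder, not a variable from the script):

    # Start bdevperf idle (-z) with its own RPC socket; $bdevperf_config
    # stands in for the JSON echoed below in the log.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bdevperf_config") &
    # later the run is kicked off over that socket:
    #   .../examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests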
00:21:31.932 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:31.932 "subsystems": [ 00:21:31.932 { 00:21:31.932 "subsystem": "keyring", 00:21:31.932 "config": [ 00:21:31.932 { 00:21:31.932 "method": "keyring_file_add_key", 00:21:31.932 "params": { 00:21:31.932 "name": "key0", 00:21:31.932 "path": "/tmp/tmp.CilNb6IJmL" 00:21:31.932 } 00:21:31.932 } 00:21:31.932 ] 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "subsystem": "iobuf", 00:21:31.932 "config": [ 00:21:31.932 { 00:21:31.932 "method": "iobuf_set_options", 00:21:31.932 "params": { 00:21:31.932 "enable_numa": false, 00:21:31.932 "large_bufsize": 135168, 00:21:31.932 "large_pool_count": 1024, 00:21:31.932 "small_bufsize": 8192, 00:21:31.932 "small_pool_count": 8192 00:21:31.932 } 00:21:31.932 } 00:21:31.932 ] 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "subsystem": "sock", 00:21:31.932 "config": [ 00:21:31.932 { 00:21:31.932 "method": "sock_set_default_impl", 00:21:31.932 "params": { 00:21:31.932 "impl_name": "posix" 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "sock_impl_set_options", 00:21:31.932 "params": { 00:21:31.932 "enable_ktls": false, 00:21:31.932 "enable_placement_id": 0, 00:21:31.932 "enable_quickack": false, 00:21:31.932 "enable_recv_pipe": true, 00:21:31.932 "enable_zerocopy_send_client": false, 00:21:31.932 "enable_zerocopy_send_server": true, 00:21:31.932 "impl_name": "ssl", 00:21:31.932 "recv_buf_size": 4096, 00:21:31.932 "send_buf_size": 4096, 00:21:31.932 "tls_version": 0, 00:21:31.932 "zerocopy_threshold": 0 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "sock_impl_set_options", 00:21:31.932 "params": { 00:21:31.932 "enable_ktls": false, 00:21:31.932 "enable_placement_id": 0, 00:21:31.932 "enable_quickack": false, 00:21:31.932 "enable_recv_pipe": true, 00:21:31.932 "enable_zerocopy_send_client": false, 00:21:31.932 "enable_zerocopy_send_server": true, 00:21:31.932 "impl_name": "posix", 00:21:31.932 "recv_buf_size": 2097152, 00:21:31.932 "send_buf_size": 2097152, 00:21:31.932 "tls_version": 0, 00:21:31.932 "zerocopy_threshold": 0 00:21:31.932 } 00:21:31.932 } 00:21:31.932 ] 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "subsystem": "vmd", 00:21:31.932 "config": [] 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "subsystem": "accel", 00:21:31.932 "config": [ 00:21:31.932 { 00:21:31.932 "method": "accel_set_options", 00:21:31.932 "params": { 00:21:31.932 "buf_count": 2048, 00:21:31.932 "large_cache_size": 16, 00:21:31.932 "sequence_count": 2048, 00:21:31.932 "small_cache_size": 128, 00:21:31.932 "task_count": 2048 00:21:31.932 } 00:21:31.932 } 00:21:31.932 ] 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "subsystem": "bdev", 00:21:31.932 "config": [ 00:21:31.932 { 00:21:31.932 "method": "bdev_set_options", 00:21:31.932 "params": { 00:21:31.932 "bdev_auto_examine": true, 00:21:31.932 "bdev_io_cache_size": 256, 00:21:31.932 "bdev_io_pool_size": 65535, 00:21:31.932 "iobuf_large_cache_size": 16, 00:21:31.932 "iobuf_small_cache_size": 128 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "bdev_raid_set_options", 00:21:31.932 "params": { 00:21:31.932 "process_max_bandwidth_mb_sec": 0, 00:21:31.932 "process_window_size_kb": 1024 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "bdev_iscsi_set_options", 00:21:31.932 "params": { 00:21:31.932 "timeout_sec": 30 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "bdev_nvme_set_options", 00:21:31.932 "params": { 00:21:31.932 "action_on_timeout": "none", 
00:21:31.932 "allow_accel_sequence": false, 00:21:31.932 "arbitration_burst": 0, 00:21:31.932 "bdev_retry_count": 3, 00:21:31.932 "ctrlr_loss_timeout_sec": 0, 00:21:31.932 "delay_cmd_submit": true, 00:21:31.932 "dhchap_dhgroups": [ 00:21:31.932 "null", 00:21:31.932 "ffdhe2048", 00:21:31.932 "ffdhe3072", 00:21:31.932 "ffdhe4096", 00:21:31.932 "ffdhe6144", 00:21:31.932 "ffdhe8192" 00:21:31.932 ], 00:21:31.932 "dhchap_digests": [ 00:21:31.932 "sha256", 00:21:31.932 "sha384", 00:21:31.932 "sha512" 00:21:31.932 ], 00:21:31.932 "disable_auto_failback": false, 00:21:31.932 "fast_io_fail_timeout_sec": 0, 00:21:31.932 "generate_uuids": false, 00:21:31.932 "high_priority_weight": 0, 00:21:31.932 "io_path_stat": false, 00:21:31.932 "io_queue_requests": 512, 00:21:31.932 "keep_alive_timeout_ms": 10000, 00:21:31.932 "low_priority_weight": 0, 00:21:31.932 "medium_priority_weight": 0, 00:21:31.932 "nvme_adminq_poll_period_us": 10000, 00:21:31.932 "nvme_error_stat": false, 00:21:31.932 "nvme_ioq_poll_period_us": 0, 00:21:31.932 "rdma_cm_event_timeout_ms": 0, 00:21:31.932 "rdma_max_cq_size": 0, 00:21:31.932 "rdma_srq_size": 0, 00:21:31.932 "reconnect_delay_sec": 0, 00:21:31.932 "timeout_admin_us": 0, 00:21:31.932 "timeout_us": 0, 00:21:31.932 "transport_ack_timeout": 0, 00:21:31.932 "transport_retry_count": 4, 00:21:31.932 "transport_tos": 0 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "bdev_nvme_attach_controller", 00:21:31.932 "params": { 00:21:31.932 "adrfam": "IPv4", 00:21:31.932 "ctrlr_loss_timeout_sec": 0, 00:21:31.932 "ddgst": false, 00:21:31.932 "fast_io_fail_timeout_sec": 0, 00:21:31.932 "hdgst": false, 00:21:31.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.932 "multipath": "multipath", 00:21:31.932 "name": "nvme0", 00:21:31.932 "prchk_guard": false, 00:21:31.932 "prchk_reftag": false, 00:21:31.932 "psk": "key0", 00:21:31.932 "reconnect_delay_sec": 0, 00:21:31.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.932 "traddr": "10.0.0.3", 00:21:31.932 "trsvcid": "4420", 00:21:31.932 "trtype": "TCP" 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "bdev_nvme_set_hotplug", 00:21:31.932 "params": { 00:21:31.932 "enable": false, 00:21:31.932 "period_us": 100000 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "bdev_enable_histogram", 00:21:31.932 "params": { 00:21:31.932 "enable": true, 00:21:31.932 "name": "nvme0n1" 00:21:31.932 } 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "method": "bdev_wait_for_examine" 00:21:31.932 } 00:21:31.932 ] 00:21:31.932 }, 00:21:31.932 { 00:21:31.932 "subsystem": "nbd", 00:21:31.932 "config": [] 00:21:31.932 } 00:21:31.932 ] 00:21:31.932 }' 00:21:31.932 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.932 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.189 [2024-11-06 14:25:59.585818] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:21:32.189 [2024-11-06 14:25:59.585928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89971 ] 00:21:32.189 [2024-11-06 14:25:59.740440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.189 [2024-11-06 14:25:59.819399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.446 [2024-11-06 14:26:00.075882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.012 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:33.013 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:33.013 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:33.013 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:33.013 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.013 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.270 Running I/O for 1 seconds... 00:21:34.205 5726.00 IOPS, 22.37 MiB/s 00:21:34.205 Latency(us) 00:21:34.205 [2024-11-06T14:26:01.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.205 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:34.205 Verification LBA range: start 0x0 length 0x2000 00:21:34.205 nvme0n1 : 1.01 5790.85 22.62 0.00 0.00 21960.71 4058.19 20971.52 00:21:34.205 [2024-11-06T14:26:01.840Z] =================================================================================================================== 00:21:34.205 [2024-11-06T14:26:01.840Z] Total : 5790.85 22.62 0.00 0.00 21960.71 4058.19 20971.52 00:21:34.205 { 00:21:34.205 "results": [ 00:21:34.205 { 00:21:34.205 "job": "nvme0n1", 00:21:34.205 "core_mask": "0x2", 00:21:34.205 "workload": "verify", 00:21:34.205 "status": "finished", 00:21:34.205 "verify_range": { 00:21:34.205 "start": 0, 00:21:34.205 "length": 8192 00:21:34.205 }, 00:21:34.205 "queue_depth": 128, 00:21:34.205 "io_size": 4096, 00:21:34.205 "runtime": 1.010905, 00:21:34.205 "iops": 5790.850772327765, 00:21:34.205 "mibps": 22.620510829405333, 00:21:34.205 "io_failed": 0, 00:21:34.205 "io_timeout": 0, 00:21:34.205 "avg_latency_us": 21960.712988357733, 00:21:34.205 "min_latency_us": 4058.190769230769, 00:21:34.205 "max_latency_us": 20971.52 00:21:34.205 } 00:21:34.205 ], 00:21:34.205 "core_count": 1 00:21:34.205 } 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:34.205 14:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:34.205 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:34.205 nvmf_trace.0 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 89971 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89971 ']' 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89971 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89971 00:21:34.463 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:34.464 killing process with pid 89971 00:21:34.464 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:34.464 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89971' 00:21:34.464 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89971 00:21:34.464 Received shutdown signal, test time was about 1.000000 seconds 00:21:34.464 00:21:34.464 Latency(us) 00:21:34.464 [2024-11-06T14:26:02.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.464 [2024-11-06T14:26:02.099Z] =================================================================================================================== 00:21:34.464 [2024-11-06T14:26:02.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.464 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89971 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.030 rmmod nvme_tcp 00:21:35.030 rmmod nvme_fabrics 00:21:35.030 rmmod nvme_keyring 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 
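The rmmod lines above are modprobe -v narrating the unload: removing nvme-tcp also drops its dependents nvme_fabrics and nvme_keyring. nvmftestfini wraps this in set +e because a module can legitimately still be in use on the first attempt. A simplified sketch of the traced cleanup (the retry details are an assumption, not copied from nvmf/common.sh):

    # Simplified module teardown; the traced helper loops up to 20 times.
    sync
    set +e                                 # tolerate "module in use" failures
    for i in {1..20}; do
        modprobe -v -r nvme-tcp &&         # also removes nvme_fabrics, nvme_keyring
            modprobe -v -r nvme-fabrics && break
    done
    set -e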
00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 89932 ']' 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 89932 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 89932 ']' 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 89932 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89932 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:35.030 killing process with pid 89932 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89932' 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 89932 00:21:35.030 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 89932 00:21:35.595 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.595 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.595 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.595 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:35.595 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:35.596 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:35.854 14:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9ULJ6QqRXZ /tmp/tmp.bEooU5wntO /tmp/tmp.CilNb6IJmL 00:21:35.854 00:21:35.854 real 1m33.065s 00:21:35.854 user 2m30.977s 00:21:35.854 sys 0m24.502s 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.854 ************************************ 00:21:35.854 END TEST nvmf_tls 00:21:35.854 ************************************ 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:35.854 ************************************ 00:21:35.854 START TEST nvmf_fips 00:21:35.854 ************************************ 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.854 * Looking for test storage... 
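Each suite runs through the run_test wrapper, which prints the START/END TEST banners, times the script with bash's time builtin (the real/user/sys triplet above), and forwards the transport argument. Its rough shape, inferred from this log's banners and timing output rather than quoted from autotest_common.sh:

    # Rough shape of run_test as inferred from the banners in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. .../test/nvmf/fips/fips.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }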
00:21:35.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:35.854 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.113 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:36.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.114 --rc genhtml_branch_coverage=1 00:21:36.114 --rc genhtml_function_coverage=1 00:21:36.114 --rc genhtml_legend=1 00:21:36.114 --rc geninfo_all_blocks=1 00:21:36.114 --rc geninfo_unexecuted_blocks=1 00:21:36.114 00:21:36.114 ' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:36.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.114 --rc genhtml_branch_coverage=1 00:21:36.114 --rc genhtml_function_coverage=1 00:21:36.114 --rc genhtml_legend=1 00:21:36.114 --rc geninfo_all_blocks=1 00:21:36.114 --rc geninfo_unexecuted_blocks=1 00:21:36.114 00:21:36.114 ' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:36.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.114 --rc genhtml_branch_coverage=1 00:21:36.114 --rc genhtml_function_coverage=1 00:21:36.114 --rc genhtml_legend=1 00:21:36.114 --rc geninfo_all_blocks=1 00:21:36.114 --rc geninfo_unexecuted_blocks=1 00:21:36.114 00:21:36.114 ' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:36.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.114 --rc genhtml_branch_coverage=1 00:21:36.114 --rc genhtml_function_coverage=1 00:21:36.114 --rc genhtml_legend=1 00:21:36.114 --rc geninfo_all_blocks=1 00:21:36.114 --rc geninfo_unexecuted_blocks=1 00:21:36.114 00:21:36.114 ' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
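Both the lcov probe just traced and the OpenSSL gate a few lines down (ge 3.1.1 3.0.0) funnel into scripts/common.sh's cmp_versions, which splits dotted versions on ".-" into arrays and compares them field by field, exactly as the read -ra ver1/ver2 and ver1[v]/ver2[v] lines show. A standalone sketch of the same comparison (assumes purely numeric fields; the real helper validates each field with a decimal check first):

    # Field-by-field dotted-version compare, mirroring the traced cmp_versions.
    ver_ge() {    # return 0 when $1 >= $2
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0
    }
    ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL >= 3.0"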
00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.114 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:36.114 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:36.115 Error setting digest 00:21:36.115 40A28102A97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:36.115 40A28102A97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.115 
14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:36.115 Cannot find device "nvmf_init_br" 00:21:36.115 14:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:36.115 Cannot find device "nvmf_init_br2" 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:36.115 Cannot find device "nvmf_tgt_br" 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.115 Cannot find device "nvmf_tgt_br2" 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:36.115 Cannot find device "nvmf_init_br" 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:36.115 Cannot find device "nvmf_init_br2" 00:21:36.115 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:36.116 Cannot find device "nvmf_tgt_br" 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:36.116 Cannot find device "nvmf_tgt_br2" 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:36.116 Cannot find device "nvmf_br" 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:36.116 Cannot find device "nvmf_init_if" 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:21:36.116 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:36.374 Cannot find device "nvmf_init_if2" 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.374 14:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:36.374 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:36.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:36.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:36.375 00:21:36.375 --- 10.0.0.3 ping statistics --- 00:21:36.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.375 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:36.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:36.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:21:36.375 00:21:36.375 --- 10.0.0.4 ping statistics --- 00:21:36.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.375 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:21:36.375 00:21:36.375 --- 10.0.0.1 ping statistics --- 00:21:36.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.375 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:36.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:36.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:21:36.375 00:21:36.375 --- 10.0.0.2 ping statistics --- 00:21:36.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.375 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=90311 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 90311 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 90311 ']' 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:36.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:36.375 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.653 [2024-11-06 14:26:04.027723] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
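For anyone reproducing this environment by hand: the nvmf_veth_init trace above reduces to one network namespace, four veth pairs, and a bridge joining the host-side peers. A condensed sketch with the same names and addresses as the trace (root and iproute2 assumed; error handling omitted):

# topology built by nvmf_veth_init, condensed from the trace above
ip netns add nvmf_tgt_ns_spdk
for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
            nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
    ip link add "${pair%%:*}" type veth peer name "${pair##*:}"
done
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target-side ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk sh -c \
    'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                # bridge all host-side peers together
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Because the two nvmf_tgt_* ends sit inside the namespace, the target's 10.0.0.3/10.0.0.4 listeners are reachable from the host only through the bridge, which is exactly what the four pings above verify before the test proceeds.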
00:21:36.653 [2024-11-06 14:26:04.027837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.653 [2024-11-06 14:26:04.187839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.930 [2024-11-06 14:26:04.286413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.930 [2024-11-06 14:26:04.286472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.930 [2024-11-06 14:26:04.286485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.930 [2024-11-06 14:26:04.286492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.930 [2024-11-06 14:26:04.286501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.930 [2024-11-06 14:26:04.287390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.0SV 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.0SV 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.0SV 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.0SV 00:21:37.495 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.495 [2024-11-06 14:26:05.067993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.495 [2024-11-06 14:26:05.083947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:37.495 [2024-11-06 14:26:05.084156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:37.753 malloc0 00:21:37.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
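Stepping back to the top of this test case: fips.sh gates everything on the OpenSSL runtime genuinely being in FIPS mode. It expects exactly two providers (base and fips) from openssl list -providers, and it requires openssl md5 to fail; the 0308010C "unsupported" digest error captured above is the pass condition, not a bug. A standalone sketch of that gate, assuming an OpenSSL 3.x host with a FIPS provider activated via spdk_fips.conf:

export OPENSSL_CONF=spdk_fips.conf           # config that switches on the fips provider
mapfile -t providers < <(openssl list -providers | grep name)
(( ${#providers[@]} == 2 )) || { echo "expected exactly base + fips providers" >&2; exit 1; }
[[ ${providers[0],,} == *base* ]] || { echo "first provider is not base" >&2; exit 1; }
[[ ${providers[1],,} == *fips* ]] || { echo "second provider is not fips" >&2; exit 1; }
# MD5 is not FIPS-approved, so a working FIPS setup must refuse to even set the digest
if openssl md5 /dev/null >/dev/null 2>&1; then
    echo "openssl md5 succeeded - FIPS mode is NOT active" >&2; exit 1
fi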
00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=90365 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 90365 /var/tmp/bdevperf.sock 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 90365 ']' 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:37.753 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:37.753 [2024-11-06 14:26:05.237262] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:37.753 [2024-11-06 14:26:05.237350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90365 ] 00:21:38.011 [2024-11-06 14:26:05.388693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.011 [2024-11-06 14:26:05.495615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.577 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:38.577 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:38.577 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.0SV 00:21:38.835 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:39.093 [2024-11-06 14:26:06.470594] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.093 TLSTESTn1 00:21:39.093 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.093 Running I/O for 10 seconds... 
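The initiator side just traced condenses to four steps: write the TLS PSK to a 0600 interchange file, start bdevperf in RPC-driven mode (-z) on its own socket, register the key with the keyring, and attach the controller over TLS by key name. Values are verbatim from the trace; paths assume an SPDK checkout as the working directory:

# PSK interchange file, permissions tightened as in the trace
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

# bdevperf waits for RPC configuration (-z) on a private socket
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# register the PSK, attach over TLS, then kick off the I/O
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

perform_tests then drives the 10-second verify workload whose per-second IOPS ticks and latency summary follow below.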
00:21:41.409 5350.00 IOPS, 20.90 MiB/s [2024-11-06T14:26:09.977Z] 5550.50 IOPS, 21.68 MiB/s [2024-11-06T14:26:10.909Z] 5432.67 IOPS, 21.22 MiB/s [2024-11-06T14:26:11.848Z] 5448.25 IOPS, 21.28 MiB/s [2024-11-06T14:26:12.781Z] 5448.40 IOPS, 21.28 MiB/s [2024-11-06T14:26:13.712Z] 5402.00 IOPS, 21.10 MiB/s [2024-11-06T14:26:15.085Z] 5409.14 IOPS, 21.13 MiB/s [2024-11-06T14:26:16.017Z] 5437.12 IOPS, 21.24 MiB/s [2024-11-06T14:26:16.950Z] 5459.11 IOPS, 21.32 MiB/s [2024-11-06T14:26:16.950Z] 5372.50 IOPS, 20.99 MiB/s 00:21:49.315 Latency(us) 00:21:49.315 [2024-11-06T14:26:16.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.315 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:49.315 Verification LBA range: start 0x0 length 0x2000 00:21:49.315 TLSTESTn1 : 10.02 5374.30 20.99 0.00 0.00 23768.77 6402.36 28432.54 00:21:49.315 [2024-11-06T14:26:16.950Z] =================================================================================================================== 00:21:49.315 [2024-11-06T14:26:16.950Z] Total : 5374.30 20.99 0.00 0.00 23768.77 6402.36 28432.54 00:21:49.315 { 00:21:49.315 "results": [ 00:21:49.315 { 00:21:49.315 "job": "TLSTESTn1", 00:21:49.315 "core_mask": "0x4", 00:21:49.315 "workload": "verify", 00:21:49.315 "status": "finished", 00:21:49.315 "verify_range": { 00:21:49.315 "start": 0, 00:21:49.315 "length": 8192 00:21:49.315 }, 00:21:49.315 "queue_depth": 128, 00:21:49.315 "io_size": 4096, 00:21:49.316 "runtime": 10.020474, 00:21:49.316 "iops": 5374.2966650080625, 00:21:49.316 "mibps": 20.993346347687744, 00:21:49.316 "io_failed": 0, 00:21:49.316 "io_timeout": 0, 00:21:49.316 "avg_latency_us": 23768.76886270174, 00:21:49.316 "min_latency_us": 6402.363076923077, 00:21:49.316 "max_latency_us": 28432.541538461537 00:21:49.316 } 00:21:49.316 ], 00:21:49.316 "core_count": 1 00:21:49.316 } 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:49.316 nvmf_trace.0 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 90365 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 90365 ']' 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
90365 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90365 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90365' 00:21:49.316 killing process with pid 90365 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 90365 00:21:49.316 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 90365 00:21:49.316 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.316 00:21:49.316 Latency(us) 00:21:49.316 [2024-11-06T14:26:16.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.316 [2024-11-06T14:26:16.951Z] =================================================================================================================== 00:21:49.316 [2024-11-06T14:26:16.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.264 rmmod nvme_tcp 00:21:50.264 rmmod nvme_fabrics 00:21:50.264 rmmod nvme_keyring 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 90311 ']' 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 90311 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 90311 ']' 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 90311 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90311 00:21:50.264 killing process with pid 90311 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90311' 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 90311 00:21:50.264 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 90311 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:50.830 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:21:51.088 14:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.0SV 00:21:51.088 ************************************ 00:21:51.088 END TEST nvmf_fips 00:21:51.088 ************************************ 00:21:51.088 00:21:51.088 real 0m15.205s 00:21:51.088 user 0m21.823s 00:21:51.088 sys 0m5.003s 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:51.088 ************************************ 00:21:51.088 START TEST nvmf_control_msg_list 00:21:51.088 ************************************ 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:51.088 * Looking for test storage... 00:21:51.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:51.088 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:51.347 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.348 --rc genhtml_branch_coverage=1 00:21:51.348 --rc genhtml_function_coverage=1 00:21:51.348 --rc genhtml_legend=1 00:21:51.348 --rc geninfo_all_blocks=1 00:21:51.348 --rc geninfo_unexecuted_blocks=1 00:21:51.348 00:21:51.348 ' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.348 --rc genhtml_branch_coverage=1 00:21:51.348 --rc genhtml_function_coverage=1 00:21:51.348 --rc genhtml_legend=1 00:21:51.348 --rc geninfo_all_blocks=1 00:21:51.348 --rc geninfo_unexecuted_blocks=1 00:21:51.348 00:21:51.348 ' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.348 --rc genhtml_branch_coverage=1 00:21:51.348 --rc genhtml_function_coverage=1 00:21:51.348 --rc genhtml_legend=1 00:21:51.348 --rc geninfo_all_blocks=1 00:21:51.348 --rc geninfo_unexecuted_blocks=1 00:21:51.348 00:21:51.348 ' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.348 --rc genhtml_branch_coverage=1 00:21:51.348 --rc genhtml_function_coverage=1 00:21:51.348 --rc genhtml_legend=1 00:21:51.348 --rc geninfo_all_blocks=1 00:21:51.348 --rc 
geninfo_unexecuted_blocks=1 00:21:51.348 00:21:51.348 ' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.348 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:51.349 Cannot find device "nvmf_init_br" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:51.349 Cannot find device "nvmf_init_br2" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:51.349 Cannot find device "nvmf_tgt_br" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:51.349 Cannot find device "nvmf_tgt_br2" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:51.349 Cannot find device "nvmf_init_br" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:51.349 Cannot find device "nvmf_init_br2" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:51.349 Cannot find device "nvmf_tgt_br" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:51.349 Cannot find device "nvmf_tgt_br2" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:51.349 Cannot find device "nvmf_br" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:51.349 Cannot find 
device "nvmf_init_if" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:51.349 Cannot find device "nvmf_init_if2" 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:51.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:51.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:51.349 14:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:51.349 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:51.607 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:51.607 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:51.607 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:51.607 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:51.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:51.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:21:51.607 00:21:51.607 --- 10.0.0.3 ping statistics --- 00:21:51.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.607 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:51.607 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:51.607 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:21:51.607 00:21:51.607 --- 10.0.0.4 ping statistics --- 00:21:51.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.607 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:51.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:21:51.607 00:21:51.607 --- 10.0.0.1 ping statistics --- 00:21:51.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.607 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:21:51.607 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:51.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:21:51.607 00:21:51.607 --- 10.0.0.2 ping statistics --- 00:21:51.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.607 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=90791 00:21:51.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 90791 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 90791 ']' 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:51.608 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.608 [2024-11-06 14:26:19.164843] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:51.608 [2024-11-06 14:26:19.165118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.864 [2024-11-06 14:26:19.327214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.864 [2024-11-06 14:26:19.422713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.864 [2024-11-06 14:26:19.422887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.864 [2024-11-06 14:26:19.422974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.864 [2024-11-06 14:26:19.423026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.865 [2024-11-06 14:26:19.423053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.865 [2024-11-06 14:26:19.423898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.430 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.430 [2024-11-06 14:26:19.998649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.430 Malloc0 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:52.430 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.431 [2024-11-06 14:26:20.054988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=90840 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=90841 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=90842 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 90840 00:21:52.431 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:52.688 [2024-11-06 14:26:20.253601] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
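Everything between the transport creation and the three perf launches above is plain JSON-RPC against the target's /var/tmp/spdk.sock. A condensed sketch of the same sequence; rpc_cmd in the log is autotest's wrapper, so calling scripts/rpc.py directly here is an assumption, but the verbs and arguments are copied from the log:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a    # -a: allow any host
    $RPC bdev_malloc_create -b Malloc0 32 512                   # 32 MiB backing bdev, 512 B blocks
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # One load generator per core, exactly as launched above:
    build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &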
subsystem. This behavior is deprecated and will be removed in a future release. 00:21:52.688 [2024-11-06 14:26:20.264761] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:52.688 [2024-11-06 14:26:20.265002] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:54.061 Initializing NVMe Controllers 00:21:54.061 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:21:54.061 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:54.061 Initialization complete. Launching workers. 00:21:54.061 ======================================================== 00:21:54.061 Latency(us) 00:21:54.061 Device Information : IOPS MiB/s Average min max 00:21:54.061 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3978.00 15.54 251.09 106.44 1591.15 00:21:54.061 ======================================================== 00:21:54.061 Total : 3978.00 15.54 251.09 106.44 1591.15 00:21:54.061 00:21:54.061 Initializing NVMe Controllers 00:21:54.061 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:21:54.061 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:54.061 Initialization complete. Launching workers. 00:21:54.061 ======================================================== 00:21:54.061 Latency(us) 00:21:54.061 Device Information : IOPS MiB/s Average min max 00:21:54.061 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3971.00 15.51 251.51 139.36 1497.41 00:21:54.061 ======================================================== 00:21:54.061 Total : 3971.00 15.51 251.51 139.36 1497.41 00:21:54.061 00:21:54.061 Initializing NVMe Controllers 00:21:54.061 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:21:54.061 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:54.061 Initialization complete. Launching workers. 
00:21:54.061 ======================================================== 00:21:54.061 Latency(us) 00:21:54.061 Device Information : IOPS MiB/s Average min max 00:21:54.061 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3960.00 15.47 252.25 151.26 1637.85 00:21:54.061 ======================================================== 00:21:54.061 Total : 3960.00 15.47 252.25 151.26 1637.85 00:21:54.061 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 90841 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 90842 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.061 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.062 rmmod nvme_tcp 00:21:54.062 rmmod nvme_fabrics 00:21:54.062 rmmod nvme_keyring 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 90791 ']' 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 90791 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 90791 ']' 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 90791 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90791 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:54.062 killing process with pid 90791 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90791' 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 90791 00:21:54.062 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
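A quick sanity check on the three tables above: at queue depth 1 each perf instance keeps exactly one I/O in flight, so IOPS is just the reciprocal of mean latency:

    echo $(( 1000000 / 251 ))   # ~3984 IOPS for a 251 us average; the log reports 3960-3978

The small spread across lcores 1-3 is ordinary noise for a 1-second run.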
common/autotest_common.sh@976 -- # wait 90791 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:54.628 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:21:54.887 00:21:54.887 real 0m3.815s 00:21:54.887 user 0m5.961s 00:21:54.887 
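The teardown above mirrors the setup: iptr sweeps the SPDK_NVMF-tagged rules (its iptables-save | grep -v SPDK_NVMF | iptables-restore expansion is visible in the log), nvmf_veth_fini detaches every port from nvmf_br and deletes the links, and _remove_spdk_ns drops the namespace. A sketch of the link cleanup; deleting one end of a veth pair removes its peer, so only the *_if ends need explicit deletion, and the final netns delete is an assumption about what _remove_spdk_ns does:

    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if                                # peer nvmf_init_br goes with it
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk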
sys 0m1.220s 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:54.887 ************************************ 00:21:54.887 END TEST nvmf_control_msg_list 00:21:54.887 ************************************ 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.887 ************************************ 00:21:54.887 START TEST nvmf_wait_for_buf 00:21:54.887 ************************************ 00:21:54.887 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:55.147 * Looking for test storage... 00:21:55.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:55.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.147 --rc genhtml_branch_coverage=1 00:21:55.147 --rc genhtml_function_coverage=1 00:21:55.147 --rc genhtml_legend=1 00:21:55.147 --rc geninfo_all_blocks=1 00:21:55.147 --rc geninfo_unexecuted_blocks=1 00:21:55.147 00:21:55.147 ' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:55.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.147 --rc genhtml_branch_coverage=1 00:21:55.147 --rc genhtml_function_coverage=1 00:21:55.147 --rc genhtml_legend=1 00:21:55.147 --rc geninfo_all_blocks=1 00:21:55.147 --rc geninfo_unexecuted_blocks=1 00:21:55.147 00:21:55.147 ' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:55.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.147 --rc genhtml_branch_coverage=1 00:21:55.147 --rc genhtml_function_coverage=1 00:21:55.147 --rc genhtml_legend=1 00:21:55.147 --rc geninfo_all_blocks=1 00:21:55.147 --rc geninfo_unexecuted_blocks=1 00:21:55.147 00:21:55.147 ' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:55.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.147 --rc genhtml_branch_coverage=1 00:21:55.147 --rc genhtml_function_coverage=1 00:21:55.147 --rc genhtml_legend=1 00:21:55.147 --rc geninfo_all_blocks=1 00:21:55.147 --rc geninfo_unexecuted_blocks=1 00:21:55.147 00:21:55.147 ' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.147 14:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:55.147 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.148 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:55.148 Cannot find device "nvmf_init_br" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:55.148 Cannot find device "nvmf_init_br2" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:55.148 Cannot find device "nvmf_tgt_br" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:55.148 Cannot find device "nvmf_tgt_br2" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:55.148 Cannot find device "nvmf_init_br" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:55.148 Cannot find device "nvmf_init_br2" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:55.148 Cannot find device "nvmf_tgt_br" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:55.148 Cannot find device "nvmf_tgt_br2" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:55.148 Cannot find device "nvmf_br" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:55.148 Cannot find device "nvmf_init_if" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:55.148 Cannot find device "nvmf_init_if2" 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:55.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:55.148 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:55.148 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:55.407 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:55.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:55.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:21:55.408 00:21:55.408 --- 10.0.0.3 ping statistics --- 00:21:55.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.408 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:55.408 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:55.408 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:21:55.408 00:21:55.408 --- 10.0.0.4 ping statistics --- 00:21:55.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.408 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:55.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:55.408 00:21:55.408 --- 10.0.0.1 ping statistics --- 00:21:55.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.408 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:55.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:55.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:21:55.408 00:21:55.408 --- 10.0.0.2 ping statistics --- 00:21:55.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.408 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=91086 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 91086 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 91086 ']' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:55.408 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.408 [2024-11-06 14:26:23.017087] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:21:55.408 [2024-11-06 14:26:23.017190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.666 [2024-11-06 14:26:23.176732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.666 [2024-11-06 14:26:23.274456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.666 [2024-11-06 14:26:23.274502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.667 [2024-11-06 14:26:23.274514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.667 [2024-11-06 14:26:23.274529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.667 [2024-11-06 14:26:23.274538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.667 [2024-11-06 14:26:23.275388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.232 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:56.232 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:56.232 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.232 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:56.232 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:56.493 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.493 14:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.493 Malloc0 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.493 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.752 [2024-11-06 14:26:24.129790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.752 [2024-11-06 14:26:24.153880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.752 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:56.752 [2024-11-06 14:26:24.375092] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
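The wait_for_buf target above is configured in a deliberately starved state: it starts with --wait-for-rpc so the iobuf small pool can be capped at 154 buffers before framework_start_init runs, and the transport gets tiny accounting (-u 8192 -n 24 -b 24), which forces 128 KiB reads at queue depth 4 onto the buffer-wait path. A sketch of that ordering, with flags copied from the log:

    # Start the target paused so pool sizing happens before subsystem init.
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
    $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # starve the small pool
    $RPC framework_start_init                                            # only now finish startup
    $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24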
This behavior is deprecated and will be removed in a future release. 00:21:58.128 Initializing NVMe Controllers 00:21:58.128 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:21:58.128 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:58.128 Initialization complete. Launching workers. 00:21:58.128 ======================================================== 00:21:58.128 Latency(us) 00:21:58.128 Device Information : IOPS MiB/s Average min max 00:21:58.128 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.49 15.94 32493.31 7953.81 64012.15 00:21:58.128 ======================================================== 00:21:58.128 Total : 127.49 15.94 32493.31 7953.81 64012.15 00:21:58.128 00:21:58.128 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:58.128 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.128 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:58.128 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.387 rmmod nvme_tcp 00:21:58.387 rmmod nvme_fabrics 00:21:58.387 rmmod nvme_keyring 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 91086 ']' 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 91086 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 91086 ']' 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 91086 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 
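The wait_for_buf pass condition is visible in the numbers above: iobuf_set_options capped the small iobuf pool at 154 entries of 8192 bytes before framework init, the TCP transport was created with only 24 shared buffers (-n 24 -b 24), and the perf workload issued 128 KiB random reads at queue depth 4, so the pool is guaranteed to run dry. The test then asserts that the nvmf_TCP module recorded a nonzero number of buffer-allocation retries (2022 here). A minimal sketch of the same check, assuming a running SPDK target reachable through scripts/rpc.py on the default /var/tmp/spdk.sock socket:

    # Starve the small iobuf pool before the framework initializes (values
    # taken from the run above), then count how often nvmf_TCP had to retry.
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc.py framework_start_init
    # ... create the transport/subsystem/listener and run spdk_nvme_perf
    #     -q 4 -o 131072 -w randread -t 1 against it, as above ...
    retry_count=$(rpc.py iobuf_get_stats |
        jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && { echo 'FAIL: pool never ran dry'; exit 1; }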
00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91086 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:58.387 killing process with pid 91086 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91086' 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 91086 00:21:58.387 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 91086 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:58.954 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.212 14:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:21:59.212 00:21:59.212 real 0m4.254s 00:21:59.212 user 0m3.838s 00:21:59.212 sys 0m0.676s 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.212 ************************************ 00:21:59.212 END TEST nvmf_wait_for_buf 00:21:59.212 ************************************ 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:59.212 14:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.213 ************************************ 00:21:59.213 START TEST nvmf_fuzz 00:21:59.213 ************************************ 00:21:59.213 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:59.213 * Looking for test storage... 
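Every suite in this log is driven through the harness's run_test wrapper, which prints the START TEST / END TEST banners and the real/user/sys timing seen above around each suite script. A simplified sketch of its shape, assuming the helper defined in autotest_common.sh (the real one also toggles xtrace and does failure bookkeeping):

    # Rough shape of the wrapper producing the banners and timings seen here.
    run_test() {
        local test_name=$1
        shift
        echo "START TEST $test_name"
        time "$@"    # e.g. run_test nvmf_fuzz .../fabrics_fuzz.sh --transport=tcp
        echo "END TEST $test_name"
    }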
00:21:59.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:59.213 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:59.213 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:21:59.213 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:59.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.493 --rc genhtml_branch_coverage=1 00:21:59.493 --rc genhtml_function_coverage=1 00:21:59.493 --rc genhtml_legend=1 00:21:59.493 --rc geninfo_all_blocks=1 00:21:59.493 --rc geninfo_unexecuted_blocks=1 00:21:59.493 00:21:59.493 ' 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:59.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.493 --rc genhtml_branch_coverage=1 00:21:59.493 --rc genhtml_function_coverage=1 00:21:59.493 --rc genhtml_legend=1 00:21:59.493 --rc geninfo_all_blocks=1 00:21:59.493 --rc geninfo_unexecuted_blocks=1 00:21:59.493 00:21:59.493 ' 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:59.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.493 --rc genhtml_branch_coverage=1 00:21:59.493 --rc genhtml_function_coverage=1 00:21:59.493 --rc genhtml_legend=1 00:21:59.493 --rc geninfo_all_blocks=1 00:21:59.493 --rc geninfo_unexecuted_blocks=1 00:21:59.493 00:21:59.493 ' 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:59.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.493 --rc genhtml_branch_coverage=1 00:21:59.493 --rc genhtml_function_coverage=1 00:21:59.493 --rc genhtml_legend=1 00:21:59.493 --rc geninfo_all_blocks=1 00:21:59.493 --rc geninfo_unexecuted_blocks=1 00:21:59.493 00:21:59.493 ' 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
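The scripts/common.sh trace above is probing the installed lcov (1.15 here) against version 2: lt splits each version string on dots, dashes, and colons, then compares the fields numerically until one side wins. Because 1.15 is older than 2, the exports that follow keep the old --rc lcov_branch_coverage=1 spelling of the coverage options. Condensed into a self-contained sketch of that comparison (not the verbatim helper):

    lt() {   # "is version $1 older than version $2?"
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # missing fields count as 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: use the lcov_branch_coverage flag names"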
00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.493 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
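The "[: : integer expression expected" complaint a few entries above (repeated later when multiconnection.sh sources the same file) is a real but non-fatal glitch: line 33 of common.sh applies a numeric test to a flag that is empty in this configuration, and [ '' -eq 1 ] errors out and simply evaluates false, so build_nvmf_app_args carries on. A short illustration of the failure mode and the usual defensive spelling (VAR is a placeholder, not the actual variable name in common.sh):

    VAR=
    [ "$VAR" -eq 1 ]        # bash: [: : integer expression expected (status 2, treated as false)
    [ "${VAR:-0}" -eq 1 ]   # defensive form: empty defaults to 0, no error, test is false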
00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:59.494 Cannot find device "nvmf_init_br" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:21:59.494 14:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:59.494 Cannot find device "nvmf_init_br2" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:59.494 Cannot find device "nvmf_tgt_br" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.494 Cannot find device "nvmf_tgt_br2" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:59.494 Cannot find device "nvmf_init_br" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:59.494 Cannot find device "nvmf_init_br2" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:59.494 Cannot find device "nvmf_tgt_br" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:59.494 Cannot find device "nvmf_tgt_br2" 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:21:59.494 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:59.494 Cannot find device "nvmf_br" 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:59.494 Cannot find device "nvmf_init_if" 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:59.494 Cannot find device "nvmf_init_if2" 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.494 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:59.495 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:59.753 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:59.754 14:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:59.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:59.754 00:21:59.754 --- 10.0.0.3 ping statistics --- 00:21:59.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.754 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:59.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:59.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:21:59.754 00:21:59.754 --- 10.0.0.4 ping statistics --- 00:21:59.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.754 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:59.754 00:21:59.754 --- 10.0.0.1 ping statistics --- 00:21:59.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.754 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:59.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:59.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:21:59.754 00:21:59.754 --- 10.0.0.2 ping statistics --- 00:21:59.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.754 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=91374 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 91374 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 91374 ']' 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
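The nvmf_tgt instance being waited on here runs inside the network namespace that nvmf_veth_init just assembled: the target-side veth ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) sit in nvmf_tgt_ns_spdk, the initiator-side ends (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) stay in the root namespace, all four bridge-side peers are enslaved to nvmf_br, and tagged iptables rules accept TCP port 4420, which the four pings above verify end to end. Condensed to a single initiator/target pair (a sketch of the trace above, which builds two of each):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end enters the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT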
00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:59.754 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 Malloc0 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.688 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:22:00.689 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:22:01.254 Shutting down 
the fuzz application 00:22:01.254 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:01.513 Shutting down the fuzz application 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:01.513 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.771 rmmod nvme_tcp 00:22:01.771 rmmod nvme_fabrics 00:22:01.771 rmmod nvme_keyring 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 91374 ']' 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 91374 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 91374 ']' 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 91374 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91374 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:01.771 killing process with pid 91374 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91374' 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 91374 00:22:01.771 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 91374 00:22:02.336 14:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:02.336 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.595 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:22:02.596 00:22:02.596 real 0m3.348s 00:22:02.596 user 0m3.486s 00:22:02.596 sys 0m0.670s 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:02.596 14:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:02.596 ************************************ 00:22:02.596 END TEST nvmf_fuzz 00:22:02.596 ************************************ 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:02.596 ************************************ 00:22:02.596 START TEST nvmf_multiconnection 00:22:02.596 ************************************ 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:02.596 * Looking for test storage... 00:22:02.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:22:02.596 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.870 --rc genhtml_branch_coverage=1 00:22:02.870 --rc genhtml_function_coverage=1 00:22:02.870 --rc genhtml_legend=1 00:22:02.870 --rc geninfo_all_blocks=1 00:22:02.870 --rc geninfo_unexecuted_blocks=1 00:22:02.870 00:22:02.870 ' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.870 --rc genhtml_branch_coverage=1 00:22:02.870 --rc genhtml_function_coverage=1 00:22:02.870 --rc genhtml_legend=1 00:22:02.870 --rc geninfo_all_blocks=1 00:22:02.870 --rc geninfo_unexecuted_blocks=1 00:22:02.870 00:22:02.870 ' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.870 --rc genhtml_branch_coverage=1 00:22:02.870 --rc genhtml_function_coverage=1 00:22:02.870 --rc genhtml_legend=1 00:22:02.870 --rc geninfo_all_blocks=1 00:22:02.870 --rc geninfo_unexecuted_blocks=1 00:22:02.870 00:22:02.870 ' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.870 --rc genhtml_branch_coverage=1 00:22:02.870 --rc genhtml_function_coverage=1 00:22:02.870 --rc genhtml_legend=1 00:22:02.870 --rc geninfo_all_blocks=1 00:22:02.870 --rc geninfo_unexecuted_blocks=1 00:22:02.870 00:22:02.870 ' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.870 
14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.870 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.870 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:02.871 14:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:22:02.871 Cannot find device "nvmf_init_br"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:22:02.871 Cannot find device "nvmf_init_br2"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:22:02.871 Cannot find device "nvmf_tgt_br"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:22:02.871 Cannot find device "nvmf_tgt_br2"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:22:02.871 Cannot find device "nvmf_init_br"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:22:02.871 Cannot find device "nvmf_init_br2"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:22:02.871 Cannot find device "nvmf_tgt_br"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:22:02.871 Cannot find device "nvmf_tgt_br2"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:22:02.871 Cannot find device "nvmf_br"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:22:02.871 Cannot find device "nvmf_init_if"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:22:02.871 Cannot find device "nvmf_init_if2"
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:02.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:02.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:22:02.871 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:22:03.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:03.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms
00:22:03.130
00:22:03.130 --- 10.0.0.3 ping statistics ---
00:22:03.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.130 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:22:03.130 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:03.130 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms
00:22:03.130
00:22:03.130 --- 10.0.0.4 ping statistics ---
00:22:03.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.130 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:03.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:03.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms
00:22:03.130
00:22:03.130 --- 10.0.0.1 ping statistics ---
00:22:03.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.130 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:22:03.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:03.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms
00:22:03.130
00:22:03.130 --- 10.0.0.2 ping statistics ---
00:22:03.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.130 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=91643
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 91643
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 91643 ']'
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:03.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
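
The nvmf_veth_init sequence above first tears down any leftovers from a previous run (the "Cannot find device" and "Cannot open network namespace" errors are expected on a clean host; each failing command is followed by a traced `true` so the cleanup cannot abort the script), then builds the test topology: four veth pairs, with the target-side ends nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, the initiator-side ends (10.0.0.1, 10.0.0.2) left in the root namespace, all bridge-side peers enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420, and cross-namespace pings to verify reachability. A minimal standalone sketch of the same construction, reduced to a single initiator/target pair and assuming the interface names and addresses from this trace:

    # Build one initiator/target veth pair bridged the way the harness does (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target ns -> root ns

Keeping the target ends in their own namespace is what lets the kernel NVMe initiator and the SPDK target share a single VM while still exercising a real TCP path between distinct network stacks.
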
00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:03.130 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:03.130 [2024-11-06 14:26:30.680465] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:03.130 [2024-11-06 14:26:30.680577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.389 [2024-11-06 14:26:30.840218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.389 [2024-11-06 14:26:30.941379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.389 [2024-11-06 14:26:30.941426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.389 [2024-11-06 14:26:30.941437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.389 [2024-11-06 14:26:30.941445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.389 [2024-11-06 14:26:30.941454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.389 [2024-11-06 14:26:30.943017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.389 [2024-11-06 14:26:30.943085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.389 [2024-11-06 14:26:30.943470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.389 [2024-11-06 14:26:30.943495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:03.955 [2024-11-06 14:26:31.548958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 
1 11 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.955 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 Malloc1 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 [2024-11-06 14:26:31.659200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 Malloc2 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 Malloc3 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.214 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 Malloc4 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 Malloc5 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:04.472 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.473 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 Malloc6 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:04.473 14:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 Malloc7 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.473 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 Malloc8 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 Malloc9 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:04.731 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.732 Malloc10 
00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.732 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.990 Malloc11 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.990 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:22:05.247 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:05.247 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:05.247 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:05.247 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:05.247 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:07.187 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.445 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:07.445 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:09.344 14:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:09.344 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:09.344 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:22:09.344 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:09.344 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.344 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:09.344 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.344 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:22:09.602 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:09.602 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:09.602 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:09.602 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:09.602 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:11.499 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:11.499 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:11.500 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:22:11.500 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:11.500 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:11.500 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:11.500 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.500 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:22:11.756 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:11.756 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:11.756 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:11.756 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:22:11.756 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.690 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:22:13.948 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:13.948 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:13.948 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.948 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:13.948 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.846 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:22:16.103 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:16.103 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:16.103 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:16.103 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:16.103 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.002 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:22:18.260 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:18.260 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:18.260 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.260 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:18.260 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.195 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:22:20.453 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:20.453 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # local i=0 00:22:20.453 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:20.453 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:20.453 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:22.353 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:22:22.611 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:22.611 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:22.611 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.611 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:22.611 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.548 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:22:24.806 14:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:24.806 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:24.806 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:24.806 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:24.806 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:27.334 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:22:29.232 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:29.232 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:29.232 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:22:29.232 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:29.232 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.232 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:22:29.232 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:29.232 [global] 00:22:29.232 thread=1 00:22:29.232 invalidate=1 00:22:29.232 rw=read 00:22:29.232 time_based=1 
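The connect phase traced above repeats one pattern per subsystem: multiconnection.sh (lines 28-30) runs nvme connect against cnode$i, then polls with waitforserial until a block device whose serial matches SPDK$i appears in lsblk. The sketch below is a minimal reconstruction from the xtrace lines (autotest_common.sh lines 1200-1210); the in-loop retry sleep is an assumption, since every traced call succeeds on its first check, and NVMF_SUBSYS, HOSTNQN, and HOSTID stand in for the values the wrapper supplies in the log.

  # Reconstructed sketch; NVMF_SUBSYS=11 per the cnode1..cnode11 sequence above,
  # HOSTNQN/HOSTID stand in for the UUID-based values shown in the log.
  waitforserial() {
      local i=0
      local nvme_device_counter=1 nvme_devices=0
      [[ -n ${2:-} ]] && nvme_device_counter=$2   # optional expected-device count (empty in the trace)
      sleep 2                                     # initial delay, visible at @1207
      while ((i++ <= 15)); do                     # retry bound, visible at @1208
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
          ((nvme_devices == nvme_device_counter)) && return 0
          sleep 2                                 # retry delay assumed, not visible in the trace
      done
      return 1
  }

  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
          -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
      waitforserial "SPDK$i"
  done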
00:22:29.232 runtime=10 00:22:29.232 ioengine=libaio 00:22:29.232 direct=1 00:22:29.232 bs=262144 00:22:29.232 iodepth=64 00:22:29.232 norandommap=1 00:22:29.232 numjobs=1 00:22:29.232 00:22:29.232 [job0] 00:22:29.232 filename=/dev/nvme0n1 00:22:29.232 [job1] 00:22:29.232 filename=/dev/nvme10n1 00:22:29.232 [job2] 00:22:29.232 filename=/dev/nvme1n1 00:22:29.232 [job3] 00:22:29.232 filename=/dev/nvme2n1 00:22:29.232 [job4] 00:22:29.232 filename=/dev/nvme3n1 00:22:29.232 [job5] 00:22:29.232 filename=/dev/nvme4n1 00:22:29.232 [job6] 00:22:29.232 filename=/dev/nvme5n1 00:22:29.232 [job7] 00:22:29.232 filename=/dev/nvme6n1 00:22:29.232 [job8] 00:22:29.232 filename=/dev/nvme7n1 00:22:29.232 [job9] 00:22:29.232 filename=/dev/nvme8n1 00:22:29.232 [job10] 00:22:29.232 filename=/dev/nvme9n1 00:22:29.232 Could not set queue depth (nvme0n1) 00:22:29.232 Could not set queue depth (nvme10n1) 00:22:29.232 Could not set queue depth (nvme1n1) 00:22:29.232 Could not set queue depth (nvme2n1) 00:22:29.232 Could not set queue depth (nvme3n1) 00:22:29.232 Could not set queue depth (nvme4n1) 00:22:29.232 Could not set queue depth (nvme5n1) 00:22:29.232 Could not set queue depth (nvme6n1) 00:22:29.232 Could not set queue depth (nvme7n1) 00:22:29.232 Could not set queue depth (nvme8n1) 00:22:29.232 Could not set queue depth (nvme9n1) 00:22:29.232 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.232 fio-3.35 00:22:29.232 Starting 11 threads 00:22:41.432 00:22:41.432 job0: (groupid=0, jobs=1): err= 0: pid=92131: Wed Nov 6 14:27:07 2024 00:22:41.432 read: IOPS=233, BW=58.4MiB/s (61.2MB/s)(591MiB/10125msec) 00:22:41.432 slat (usec): min=12, max=197128, avg=4204.25, stdev=16644.30 00:22:41.432 clat (msec): min=11, max=472, avg=269.43, stdev=76.60 00:22:41.432 lat (msec): min=12, max=497, avg=273.63, stdev=78.91 00:22:41.432 clat percentiles (msec): 00:22:41.432 | 1.00th=[ 24], 5.00th=[ 114], 10.00th=[ 148], 20.00th=[ 243], 00:22:41.432 | 30.00th=[ 259], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 292], 00:22:41.432 | 70.00th=[ 305], 80.00th=[ 321], 90.00th=[ 347], 95.00th=[ 372], 00:22:41.432 | 99.00th=[ 409], 99.50th=[ 426], 99.90th=[ 472], 99.95th=[ 472], 00:22:41.432 | 99.99th=[ 472] 00:22:41.432 bw ( KiB/s): min=43008, max=113891, 
per=6.57%, avg=58905.40, stdev=15248.19, samples=20 00:22:41.432 iops : min= 168, max= 444, avg=230.00, stdev=59.40, samples=20 00:22:41.432 lat (msec) : 20=0.59%, 50=1.44%, 100=0.51%, 250=21.23%, 500=76.24% 00:22:41.432 cpu : usr=0.06%, sys=0.76%, ctx=473, majf=0, minf=4097 00:22:41.432 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:22:41.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.432 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job1: (groupid=0, jobs=1): err= 0: pid=92132: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=105, BW=26.5MiB/s (27.8MB/s)(269MiB/10165msec) 00:22:41.433 slat (usec): min=11, max=434978, avg=9084.19, stdev=41361.24 00:22:41.433 clat (msec): min=14, max=954, avg=594.28, stdev=155.72 00:22:41.433 lat (msec): min=14, max=1053, avg=603.36, stdev=161.68 00:22:41.433 clat percentiles (msec): 00:22:41.433 | 1.00th=[ 36], 5.00th=[ 169], 10.00th=[ 426], 20.00th=[ 550], 00:22:41.433 | 30.00th=[ 558], 40.00th=[ 592], 50.00th=[ 617], 60.00th=[ 651], 00:22:41.433 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 743], 95.00th=[ 785], 00:22:41.433 | 99.00th=[ 827], 99.50th=[ 869], 99.90th=[ 936], 99.95th=[ 953], 00:22:41.433 | 99.99th=[ 953] 00:22:41.433 bw ( KiB/s): min=11776, max=50176, per=2.89%, avg=25928.50, stdev=8520.13, samples=20 00:22:41.433 iops : min= 46, max= 196, avg=101.20, stdev=33.33, samples=20 00:22:41.433 lat (msec) : 20=0.37%, 50=1.30%, 250=4.64%, 500=12.53%, 750=72.98% 00:22:41.433 lat (msec) : 1000=8.17% 00:22:41.433 cpu : usr=0.02%, sys=0.38%, ctx=157, majf=0, minf=4097 00:22:41.433 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:22:41.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.433 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.433 issued rwts: total=1077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job2: (groupid=0, jobs=1): err= 0: pid=92133: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=119, BW=29.8MiB/s (31.2MB/s)(303MiB/10166msec) 00:22:41.433 slat (usec): min=11, max=303486, avg=8316.72, stdev=35473.92 00:22:41.433 clat (msec): min=17, max=836, avg=528.60, stdev=147.25 00:22:41.433 lat (msec): min=17, max=893, avg=536.92, stdev=152.33 00:22:41.433 clat percentiles (msec): 00:22:41.433 | 1.00th=[ 37], 5.00th=[ 249], 10.00th=[ 313], 20.00th=[ 426], 00:22:41.433 | 30.00th=[ 481], 40.00th=[ 535], 50.00th=[ 567], 60.00th=[ 600], 00:22:41.433 | 70.00th=[ 617], 80.00th=[ 651], 90.00th=[ 676], 95.00th=[ 709], 00:22:41.433 | 99.00th=[ 760], 99.50th=[ 760], 99.90th=[ 835], 99.95th=[ 835], 00:22:41.433 | 99.99th=[ 835] 00:22:41.433 bw ( KiB/s): min=15360, max=51712, per=3.27%, avg=29332.10, stdev=8738.89, samples=20 00:22:41.433 iops : min= 60, max= 202, avg=114.50, stdev=34.16, samples=20 00:22:41.433 lat (msec) : 20=0.33%, 50=1.74%, 250=3.88%, 500=30.00%, 750=62.98% 00:22:41.433 lat (msec) : 1000=1.07% 00:22:41.433 cpu : usr=0.02%, sys=0.33%, ctx=162, majf=0, minf=4097 00:22:41.433 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:22:41.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.433 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:22:41.433 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job3: (groupid=0, jobs=1): err= 0: pid=92134: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=232, BW=58.1MiB/s (60.9MB/s)(588MiB/10123msec) 00:22:41.433 slat (usec): min=13, max=440023, avg=4265.39, stdev=19462.60 00:22:41.433 clat (msec): min=34, max=741, avg=270.88, stdev=73.51 00:22:41.433 lat (msec): min=34, max=985, avg=275.14, stdev=76.05 00:22:41.433 clat percentiles (msec): 00:22:41.433 | 1.00th=[ 125], 5.00th=[ 209], 10.00th=[ 220], 20.00th=[ 228], 00:22:41.433 | 30.00th=[ 236], 40.00th=[ 247], 50.00th=[ 257], 60.00th=[ 266], 00:22:41.433 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 330], 95.00th=[ 472], 00:22:41.433 | 99.00th=[ 542], 99.50th=[ 609], 99.90th=[ 718], 99.95th=[ 718], 00:22:41.433 | 99.99th=[ 743] 00:22:41.433 bw ( KiB/s): min= 9234, max=74752, per=6.54%, avg=58566.60, stdev=15950.05, samples=20 00:22:41.433 iops : min= 36, max= 292, avg=228.75, stdev=62.30, samples=20 00:22:41.433 lat (msec) : 50=0.21%, 100=0.04%, 250=44.94%, 500=50.98%, 750=3.83% 00:22:41.433 cpu : usr=0.07%, sys=0.81%, ctx=526, majf=0, minf=4097 00:22:41.433 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:22:41.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.433 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job4: (groupid=0, jobs=1): err= 0: pid=92135: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=119, BW=29.9MiB/s (31.3MB/s)(304MiB/10165msec) 00:22:41.433 slat (usec): min=12, max=379196, avg=8343.76, stdev=36522.54 00:22:41.433 clat (msec): min=12, max=852, avg=526.53, stdev=154.92 00:22:41.433 lat (msec): min=12, max=997, avg=534.87, stdev=159.92 00:22:41.433 clat percentiles (msec): 00:22:41.433 | 1.00th=[ 36], 5.00th=[ 165], 10.00th=[ 334], 20.00th=[ 439], 00:22:41.433 | 30.00th=[ 498], 40.00th=[ 531], 50.00th=[ 558], 60.00th=[ 584], 00:22:41.433 | 70.00th=[ 617], 80.00th=[ 642], 90.00th=[ 667], 95.00th=[ 726], 00:22:41.433 | 99.00th=[ 827], 99.50th=[ 827], 99.90th=[ 852], 99.95th=[ 852], 00:22:41.433 | 99.99th=[ 852] 00:22:41.433 bw ( KiB/s): min=18432, max=46592, per=3.29%, avg=29460.30, stdev=6543.62, samples=20 00:22:41.433 iops : min= 72, max= 182, avg=115.00, stdev=25.61, samples=20 00:22:41.433 lat (msec) : 20=0.58%, 50=1.81%, 250=5.84%, 500=23.13%, 750=66.26% 00:22:41.433 lat (msec) : 1000=2.39% 00:22:41.433 cpu : usr=0.03%, sys=0.40%, ctx=130, majf=0, minf=4097 00:22:41.433 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:22:41.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.433 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.433 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job5: (groupid=0, jobs=1): err= 0: pid=92136: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=222, BW=55.7MiB/s (58.4MB/s)(567MiB/10167msec) 00:22:41.433 slat (usec): min=10, max=413453, avg=4413.19, stdev=17873.62 00:22:41.433 clat (msec): min=18, max=743, avg=282.31, stdev=76.35 00:22:41.433 lat (msec): min=18, max=814, avg=286.72, stdev=78.27 00:22:41.433 clat percentiles (msec): 00:22:41.433 | 1.00th=[ 190], 
5.00th=[ 222], 10.00th=[ 236], 20.00th=[ 245], 00:22:41.433 | 30.00th=[ 251], 40.00th=[ 259], 50.00th=[ 266], 60.00th=[ 275], 00:22:41.433 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 334], 95.00th=[ 372], 00:22:41.433 | 99.00th=[ 676], 99.50th=[ 684], 99.90th=[ 718], 99.95th=[ 718], 00:22:41.433 | 99.99th=[ 743] 00:22:41.433 bw ( KiB/s): min=25088, max=70144, per=6.29%, avg=56385.45, stdev=11001.96, samples=20 00:22:41.433 iops : min= 98, max= 274, avg=220.20, stdev=42.98, samples=20 00:22:41.433 lat (msec) : 20=0.18%, 50=0.31%, 250=29.57%, 500=67.17%, 750=2.78% 00:22:41.433 cpu : usr=0.04%, sys=0.68%, ctx=451, majf=0, minf=4097 00:22:41.433 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:22:41.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.433 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job6: (groupid=0, jobs=1): err= 0: pid=92137: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=198, BW=49.6MiB/s (52.0MB/s)(502MiB/10119msec) 00:22:41.433 slat (usec): min=9, max=579474, avg=4916.39, stdev=21431.22 00:22:41.433 clat (msec): min=33, max=746, avg=317.50, stdev=69.98 00:22:41.433 lat (msec): min=34, max=780, avg=322.41, stdev=71.64 00:22:41.433 clat percentiles (msec): 00:22:41.433 | 1.00th=[ 232], 5.00th=[ 245], 10.00th=[ 262], 20.00th=[ 275], 00:22:41.433 | 30.00th=[ 284], 40.00th=[ 292], 50.00th=[ 305], 60.00th=[ 317], 00:22:41.433 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 384], 95.00th=[ 409], 00:22:41.433 | 99.00th=[ 609], 99.50th=[ 617], 99.90th=[ 718], 99.95th=[ 743], 00:22:41.433 | 99.99th=[ 743] 00:22:41.433 bw ( KiB/s): min= 2560, max=64000, per=5.55%, avg=49735.20, stdev=12718.04, samples=20 00:22:41.433 iops : min= 10, max= 250, avg=194.25, stdev=49.67, samples=20 00:22:41.433 lat (msec) : 50=0.25%, 100=0.05%, 250=4.93%, 500=91.63%, 750=3.14% 00:22:41.433 cpu : usr=0.02%, sys=0.62%, ctx=432, majf=0, minf=4097 00:22:41.433 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:22:41.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.433 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.433 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job7: (groupid=0, jobs=1): err= 0: pid=92138: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=1741, BW=435MiB/s (456MB/s)(4360MiB/10016msec) 00:22:41.433 slat (usec): min=11, max=19744, avg=571.53, stdev=2676.16 00:22:41.433 clat (usec): min=9167, max=72872, avg=36135.03, stdev=7173.94 00:22:41.433 lat (usec): min=9317, max=80030, avg=36706.56, stdev=7726.00 00:22:41.433 clat percentiles (usec): 00:22:41.433 | 1.00th=[27919], 5.00th=[29492], 10.00th=[30278], 20.00th=[31589], 00:22:41.433 | 30.00th=[32375], 40.00th=[33162], 50.00th=[33817], 60.00th=[34341], 00:22:41.433 | 70.00th=[36439], 80.00th=[40633], 90.00th=[45351], 95.00th=[54264], 00:22:41.433 | 99.00th=[60031], 99.50th=[62129], 99.90th=[64226], 99.95th=[68682], 00:22:41.433 | 99.99th=[72877] 00:22:41.433 bw ( KiB/s): min=277504, max=524800, per=49.64%, avg=444801.25, stdev=73518.12, samples=20 00:22:41.433 iops : min= 1084, max= 2050, avg=1737.50, stdev=287.18, samples=20 00:22:41.433 lat (msec) : 10=0.03%, 20=0.23%, 50=93.47%, 100=6.27% 00:22:41.433 cpu : 
usr=0.25%, sys=3.94%, ctx=1716, majf=0, minf=4097 00:22:41.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:41.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.433 issued rwts: total=17440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.433 job8: (groupid=0, jobs=1): err= 0: pid=92139: Wed Nov 6 14:27:07 2024 00:22:41.433 read: IOPS=221, BW=55.4MiB/s (58.1MB/s)(562MiB/10128msec) 00:22:41.433 slat (usec): min=12, max=381823, avg=4450.42, stdev=19150.28 00:22:41.433 clat (msec): min=18, max=916, avg=283.75, stdev=83.57 00:22:41.433 lat (msec): min=19, max=916, avg=288.20, stdev=85.72 00:22:41.433 clat percentiles (msec): 00:22:41.433 | 1.00th=[ 35], 5.00th=[ 218], 10.00th=[ 226], 20.00th=[ 241], 00:22:41.434 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 271], 60.00th=[ 284], 00:22:41.434 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 347], 95.00th=[ 368], 00:22:41.434 | 99.00th=[ 718], 99.50th=[ 718], 99.90th=[ 743], 99.95th=[ 743], 00:22:41.434 | 99.99th=[ 919] 00:22:41.434 bw ( KiB/s): min=31744, max=67449, per=6.23%, avg=55872.50, stdev=9464.97, samples=20 00:22:41.434 iops : min= 124, max= 263, avg=218.20, stdev=36.94, samples=20 00:22:41.434 lat (msec) : 20=0.04%, 50=1.47%, 250=26.45%, 500=69.23%, 750=2.76% 00:22:41.434 lat (msec) : 1000=0.04% 00:22:41.434 cpu : usr=0.03%, sys=0.74%, ctx=450, majf=0, minf=4097 00:22:41.434 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:22:41.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.434 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.434 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.434 job9: (groupid=0, jobs=1): err= 0: pid=92140: Wed Nov 6 14:27:07 2024 00:22:41.434 read: IOPS=113, BW=28.5MiB/s (29.8MB/s)(289MiB/10162msec) 00:22:41.434 slat (usec): min=13, max=312053, avg=8693.19, stdev=36155.09 00:22:41.434 clat (msec): min=28, max=884, avg=552.71, stdev=171.09 00:22:41.434 lat (msec): min=28, max=1012, avg=561.41, stdev=175.70 00:22:41.434 clat percentiles (msec): 00:22:41.434 | 1.00th=[ 32], 5.00th=[ 245], 10.00th=[ 330], 20.00th=[ 418], 00:22:41.434 | 30.00th=[ 514], 40.00th=[ 550], 50.00th=[ 592], 60.00th=[ 617], 00:22:41.434 | 70.00th=[ 642], 80.00th=[ 676], 90.00th=[ 726], 95.00th=[ 776], 00:22:41.434 | 99.00th=[ 827], 99.50th=[ 827], 99.90th=[ 860], 99.95th=[ 885], 00:22:41.434 | 99.99th=[ 885] 00:22:41.434 bw ( KiB/s): min=10752, max=55296, per=3.12%, avg=27977.55, stdev=8649.49, samples=20 00:22:41.434 iops : min= 42, max= 216, avg=109.25, stdev=33.77, samples=20 00:22:41.434 lat (msec) : 50=3.20%, 100=0.09%, 250=2.16%, 500=23.25%, 750=63.70% 00:22:41.434 lat (msec) : 1000=7.61% 00:22:41.434 cpu : usr=0.03%, sys=0.41%, ctx=160, majf=0, minf=4097 00:22:41.434 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6% 00:22:41.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.434 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.434 issued rwts: total=1157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.434 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.434 job10: (groupid=0, jobs=1): err= 0: pid=92141: Wed Nov 6 14:27:07 2024 
00:22:41.434 read: IOPS=222, BW=55.7MiB/s (58.4MB/s)(564MiB/10121msec) 00:22:41.434 slat (usec): min=11, max=185136, avg=4438.76, stdev=17652.09 00:22:41.434 clat (msec): min=11, max=492, avg=282.48, stdev=80.92 00:22:41.434 lat (msec): min=11, max=531, avg=286.92, stdev=83.42 00:22:41.434 clat percentiles (msec): 00:22:41.434 | 1.00th=[ 48], 5.00th=[ 116], 10.00th=[ 155], 20.00th=[ 251], 00:22:41.434 | 30.00th=[ 268], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 305], 00:22:41.434 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 368], 95.00th=[ 393], 00:22:41.434 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 451], 99.95th=[ 456], 00:22:41.434 | 99.99th=[ 493] 00:22:41.434 bw ( KiB/s): min=33792, max=119022, per=6.26%, avg=56095.85, stdev=16307.55, samples=20 00:22:41.434 iops : min= 132, max= 464, avg=219.05, stdev=63.51, samples=20 00:22:41.434 lat (msec) : 20=0.98%, 50=0.22%, 100=1.64%, 250=16.98%, 500=80.18% 00:22:41.434 cpu : usr=0.05%, sys=0.68%, ctx=544, majf=0, minf=4097 00:22:41.434 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:22:41.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.434 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.434 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.434 00:22:41.434 Run status group 0 (all jobs): 00:22:41.434 READ: bw=875MiB/s (918MB/s), 26.5MiB/s-435MiB/s (27.8MB/s-456MB/s), io=8898MiB (9330MB), run=10016-10167msec 00:22:41.434 00:22:41.434 Disk stats (read/write): 00:22:41.434 nvme0n1: ios=4628/0, merge=0/0, ticks=1246667/0, in_queue=1246667, util=98.48% 00:22:41.434 nvme10n1: ios=2026/0, merge=0/0, ticks=1210110/0, in_queue=1210110, util=98.59% 00:22:41.434 nvme1n1: ios=2328/0, merge=0/0, ticks=1243158/0, in_queue=1243158, util=98.63% 00:22:41.434 nvme2n1: ios=4601/0, merge=0/0, ticks=1249756/0, in_queue=1249756, util=98.62% 00:22:41.434 nvme3n1: ios=2303/0, merge=0/0, ticks=1237600/0, in_queue=1237600, util=98.75% 00:22:41.434 nvme4n1: ios=4482/0, merge=0/0, ticks=1255732/0, in_queue=1255732, util=98.78% 00:22:41.434 nvme5n1: ios=3950/0, merge=0/0, ticks=1250877/0, in_queue=1250877, util=98.82% 00:22:41.434 nvme6n1: ios=34127/0, merge=0/0, ticks=1212477/0, in_queue=1212477, util=98.85% 00:22:41.434 nvme7n1: ios=4419/0, merge=0/0, ticks=1249778/0, in_queue=1249778, util=99.01% 00:22:41.434 nvme8n1: ios=2220/0, merge=0/0, ticks=1239660/0, in_queue=1239660, util=99.03% 00:22:41.434 nvme9n1: ios=4448/0, merge=0/0, ticks=1250895/0, in_queue=1250895, util=99.09% 00:22:41.434 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:41.434 [global] 00:22:41.434 thread=1 00:22:41.434 invalidate=1 00:22:41.434 rw=randwrite 00:22:41.434 time_based=1 00:22:41.434 runtime=10 00:22:41.434 ioengine=libaio 00:22:41.434 direct=1 00:22:41.434 bs=262144 00:22:41.434 iodepth=64 00:22:41.434 norandommap=1 00:22:41.434 numjobs=1 00:22:41.434 00:22:41.434 [job0] 00:22:41.434 filename=/dev/nvme0n1 00:22:41.434 [job1] 00:22:41.434 filename=/dev/nvme10n1 00:22:41.434 [job2] 00:22:41.434 filename=/dev/nvme1n1 00:22:41.434 [job3] 00:22:41.434 filename=/dev/nvme2n1 00:22:41.434 [job4] 00:22:41.434 filename=/dev/nvme3n1 00:22:41.434 [job5] 00:22:41.434 filename=/dev/nvme4n1 00:22:41.434 [job6] 00:22:41.434 filename=/dev/nvme5n1 00:22:41.434 [job7] 
00:22:41.434 filename=/dev/nvme6n1 00:22:41.434 [job8] 00:22:41.434 filename=/dev/nvme7n1 00:22:41.434 [job9] 00:22:41.434 filename=/dev/nvme8n1 00:22:41.434 [job10] 00:22:41.434 filename=/dev/nvme9n1 00:22:41.434 Could not set queue depth (nvme0n1) 00:22:41.434 Could not set queue depth (nvme10n1) 00:22:41.434 Could not set queue depth (nvme1n1) 00:22:41.434 Could not set queue depth (nvme2n1) 00:22:41.434 Could not set queue depth (nvme3n1) 00:22:41.434 Could not set queue depth (nvme4n1) 00:22:41.434 Could not set queue depth (nvme5n1) 00:22:41.434 Could not set queue depth (nvme6n1) 00:22:41.434 Could not set queue depth (nvme7n1) 00:22:41.434 Could not set queue depth (nvme8n1) 00:22:41.434 Could not set queue depth (nvme9n1) 00:22:41.434 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.434 fio-3.35 00:22:41.434 Starting 11 threads 00:22:51.406 00:22:51.406 job0: (groupid=0, jobs=1): err= 0: pid=92342: Wed Nov 6 14:27:17 2024 00:22:51.406 write: IOPS=460, BW=115MiB/s (121MB/s)(1162MiB/10093msec); 0 zone resets 00:22:51.406 slat (usec): min=18, max=31058, avg=2151.16, stdev=3751.47 00:22:51.406 clat (msec): min=14, max=245, avg=136.84, stdev=16.16 00:22:51.406 lat (msec): min=14, max=255, avg=138.99, stdev=15.97 00:22:51.406 clat percentiles (msec): 00:22:51.406 | 1.00th=[ 112], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 131], 00:22:51.406 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 138], 00:22:51.406 | 70.00th=[ 138], 80.00th=[ 140], 90.00th=[ 140], 95.00th=[ 142], 00:22:51.406 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 245], 99.95th=[ 247], 00:22:51.406 | 99.99th=[ 247] 00:22:51.406 bw ( KiB/s): min=74752, max=122880, per=13.05%, avg=117312.90, stdev=10169.77, samples=20 00:22:51.406 iops : min= 292, max= 480, avg=458.25, stdev=39.73, samples=20 00:22:51.406 lat (msec) : 20=0.04%, 50=0.17%, 100=0.54%, 250=99.25% 00:22:51.406 cpu : usr=0.75%, sys=0.81%, ctx=2798, majf=0, minf=1 00:22:51.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:51.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
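A quick cross-check of the job0 write figures above: with bs=262144 (256 KiB), fio's bandwidth is simply IOPS times block size, and the io= figure is total issued I/Os times block size. The one-liners below are illustrative only, not part of the test:

  # job0 (write): avg 458.25 IOPS x 256 KiB = 117312 KiB/s, the avg= value on the bw line
  echo 'scale=2; 458.25 * 256 / 1024' | bc          # ~114.56 MiB/s, rounded to 115 MiB/s in the header
  # the 4646 issued writes reported just below, over the 10093 ms runtime, give the same rate
  echo 'scale=2; 4646 * 256 / 1024 / 10.093' | bc   # ~115.08 MiB/s, matching 1162MiB/10093msec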
00:22:51.406 issued rwts: total=0,4646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.406 job1: (groupid=0, jobs=1): err= 0: pid=92343: Wed Nov 6 14:27:17 2024 00:22:51.406 write: IOPS=290, BW=72.5MiB/s (76.1MB/s)(735MiB/10134msec); 0 zone resets 00:22:51.406 slat (usec): min=17, max=47970, avg=3316.23, stdev=5976.03 00:22:51.406 clat (msec): min=18, max=348, avg=217.20, stdev=21.55 00:22:51.406 lat (msec): min=19, max=348, avg=220.52, stdev=21.24 00:22:51.406 clat percentiles (msec): 00:22:51.406 | 1.00th=[ 117], 5.00th=[ 197], 10.00th=[ 205], 20.00th=[ 211], 00:22:51.406 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 224], 00:22:51.406 | 70.00th=[ 226], 80.00th=[ 228], 90.00th=[ 230], 95.00th=[ 232], 00:22:51.406 | 99.00th=[ 268], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 351], 00:22:51.406 | 99.99th=[ 351] 00:22:51.406 bw ( KiB/s): min=67584, max=81920, per=8.19%, avg=73644.00, stdev=2766.67, samples=20 00:22:51.406 iops : min= 264, max= 320, avg=287.65, stdev=10.83, samples=20 00:22:51.406 lat (msec) : 20=0.10%, 100=0.48%, 250=97.99%, 500=1.43% 00:22:51.406 cpu : usr=0.57%, sys=0.51%, ctx=3218, majf=0, minf=1 00:22:51.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:22:51.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.406 issued rwts: total=0,2940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.406 job2: (groupid=0, jobs=1): err= 0: pid=92344: Wed Nov 6 14:27:17 2024 00:22:51.406 write: IOPS=250, BW=62.7MiB/s (65.7MB/s)(639MiB/10187msec); 0 zone resets 00:22:51.406 slat (usec): min=14, max=41670, avg=3920.70, stdev=7014.52 00:22:51.406 clat (usec): min=1709, max=470536, avg=251244.51, stdev=48745.11 00:22:51.407 lat (usec): min=1774, max=470574, avg=255165.21, stdev=49026.97 00:22:51.407 clat percentiles (msec): 00:22:51.407 | 1.00th=[ 57], 5.00th=[ 201], 10.00th=[ 220], 20.00th=[ 228], 00:22:51.407 | 30.00th=[ 234], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 247], 00:22:51.407 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 309], 95.00th=[ 317], 00:22:51.407 | 99.00th=[ 359], 99.50th=[ 418], 99.90th=[ 451], 99.95th=[ 472], 00:22:51.407 | 99.99th=[ 472] 00:22:51.407 bw ( KiB/s): min=51200, max=82432, per=7.09%, avg=63744.00, stdev=8474.29, samples=20 00:22:51.407 iops : min= 200, max= 322, avg=249.00, stdev=33.10, samples=20 00:22:51.407 lat (msec) : 2=0.04%, 10=0.31%, 20=0.16%, 50=0.47%, 100=0.78% 00:22:51.407 lat (msec) : 250=62.14%, 500=36.10% 00:22:51.407 cpu : usr=0.54%, sys=0.39%, ctx=1708, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.407 issued rwts: total=0,2554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.407 job3: (groupid=0, jobs=1): err= 0: pid=92354: Wed Nov 6 14:27:17 2024 00:22:51.407 write: IOPS=288, BW=72.2MiB/s (75.7MB/s)(732MiB/10136msec); 0 zone resets 00:22:51.407 slat (usec): min=17, max=55329, avg=3414.33, stdev=6032.09 00:22:51.407 clat (msec): min=22, max=351, avg=218.05, stdev=22.93 00:22:51.407 lat (msec): min=22, max=351, avg=221.47, stdev=22.54 00:22:51.407 clat 
percentiles (msec): 00:22:51.407 | 1.00th=[ 109], 5.00th=[ 201], 10.00th=[ 205], 20.00th=[ 211], 00:22:51.407 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 222], 00:22:51.407 | 70.00th=[ 226], 80.00th=[ 228], 90.00th=[ 230], 95.00th=[ 239], 00:22:51.407 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 338], 99.95th=[ 351], 00:22:51.407 | 99.99th=[ 351] 00:22:51.407 bw ( KiB/s): min=69632, max=75776, per=8.16%, avg=73336.65, stdev=1993.28, samples=20 00:22:51.407 iops : min= 272, max= 296, avg=286.45, stdev= 7.79, samples=20 00:22:51.407 lat (msec) : 50=0.41%, 100=0.55%, 250=95.80%, 500=3.24% 00:22:51.407 cpu : usr=0.55%, sys=0.51%, ctx=1660, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.407 issued rwts: total=0,2928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.407 job4: (groupid=0, jobs=1): err= 0: pid=92357: Wed Nov 6 14:27:17 2024 00:22:51.407 write: IOPS=248, BW=62.0MiB/s (65.0MB/s)(631MiB/10170msec); 0 zone resets 00:22:51.407 slat (usec): min=17, max=60226, avg=3965.36, stdev=7121.40 00:22:51.407 clat (msec): min=61, max=463, avg=253.91, stdev=41.57 00:22:51.407 lat (msec): min=61, max=463, avg=257.88, stdev=41.65 00:22:51.407 clat percentiles (msec): 00:22:51.407 | 1.00th=[ 122], 5.00th=[ 213], 10.00th=[ 222], 20.00th=[ 228], 00:22:51.407 | 30.00th=[ 234], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 247], 00:22:51.407 | 70.00th=[ 275], 80.00th=[ 300], 90.00th=[ 309], 95.00th=[ 317], 00:22:51.407 | 99.00th=[ 355], 99.50th=[ 409], 99.90th=[ 443], 99.95th=[ 464], 00:22:51.407 | 99.99th=[ 464] 00:22:51.407 bw ( KiB/s): min=51200, max=71680, per=7.01%, avg=62976.00, stdev=7069.15, samples=20 00:22:51.407 iops : min= 200, max= 280, avg=246.00, stdev=27.61, samples=20 00:22:51.407 lat (msec) : 100=0.59%, 250=63.81%, 500=35.59% 00:22:51.407 cpu : usr=0.36%, sys=0.48%, ctx=2839, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.407 issued rwts: total=0,2523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.407 job5: (groupid=0, jobs=1): err= 0: pid=92358: Wed Nov 6 14:27:17 2024 00:22:51.407 write: IOPS=455, BW=114MiB/s (119MB/s)(1150MiB/10092msec); 0 zone resets 00:22:51.407 slat (usec): min=17, max=116173, avg=2172.85, stdev=4135.64 00:22:51.407 clat (msec): min=85, max=353, avg=138.20, stdev=20.29 00:22:51.407 lat (msec): min=93, max=353, avg=140.38, stdev=20.13 00:22:51.407 clat percentiles (msec): 00:22:51.407 | 1.00th=[ 125], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 131], 00:22:51.407 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 138], 00:22:51.407 | 70.00th=[ 138], 80.00th=[ 140], 90.00th=[ 140], 95.00th=[ 142], 00:22:51.407 | 99.00th=[ 247], 99.50th=[ 271], 99.90th=[ 342], 99.95th=[ 355], 00:22:51.407 | 99.99th=[ 355] 00:22:51.407 bw ( KiB/s): min=49053, max=122880, per=12.92%, avg=116130.35, stdev=15872.69, samples=20 00:22:51.407 iops : min= 191, max= 480, avg=453.60, stdev=62.14, samples=20 00:22:51.407 lat (msec) : 100=0.11%, 250=99.15%, 500=0.74% 00:22:51.407 cpu : 
usr=0.75%, sys=0.87%, ctx=8387, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.407 issued rwts: total=0,4600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.407 job6: (groupid=0, jobs=1): err= 0: pid=92359: Wed Nov 6 14:27:17 2024 00:22:51.407 write: IOPS=285, BW=71.3MiB/s (74.7MB/s)(722MiB/10127msec); 0 zone resets 00:22:51.407 slat (usec): min=17, max=131742, avg=3463.14, stdev=6476.45 00:22:51.407 clat (msec): min=121, max=343, avg=220.89, stdev=16.98 00:22:51.407 lat (msec): min=135, max=343, avg=224.35, stdev=16.06 00:22:51.407 clat percentiles (msec): 00:22:51.407 | 1.00th=[ 178], 5.00th=[ 203], 10.00th=[ 207], 20.00th=[ 211], 00:22:51.407 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 224], 00:22:51.407 | 70.00th=[ 226], 80.00th=[ 228], 90.00th=[ 232], 95.00th=[ 249], 00:22:51.407 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 330], 99.95th=[ 342], 00:22:51.407 | 99.99th=[ 342] 00:22:51.407 bw ( KiB/s): min=52736, max=75776, per=8.04%, avg=72294.40, stdev=5023.70, samples=20 00:22:51.407 iops : min= 206, max= 296, avg=282.40, stdev=19.62, samples=20 00:22:51.407 lat (msec) : 250=95.29%, 500=4.71% 00:22:51.407 cpu : usr=0.38%, sys=0.59%, ctx=3014, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.407 issued rwts: total=0,2887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.407 job7: (groupid=0, jobs=1): err= 0: pid=92360: Wed Nov 6 14:27:17 2024 00:22:51.407 write: IOPS=459, BW=115MiB/s (120MB/s)(1159MiB/10093msec); 0 zone resets 00:22:51.407 slat (usec): min=12, max=60183, avg=2155.29, stdev=3813.66 00:22:51.407 clat (msec): min=21, max=271, avg=137.14, stdev=18.10 00:22:51.407 lat (msec): min=21, max=271, avg=139.29, stdev=17.97 00:22:51.407 clat percentiles (msec): 00:22:51.407 | 1.00th=[ 117], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 131], 00:22:51.407 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 138], 00:22:51.407 | 70.00th=[ 138], 80.00th=[ 140], 90.00th=[ 140], 95.00th=[ 142], 00:22:51.407 | 99.00th=[ 245], 99.50th=[ 259], 99.90th=[ 271], 99.95th=[ 271], 00:22:51.407 | 99.99th=[ 271] 00:22:51.407 bw ( KiB/s): min=69771, max=122880, per=13.02%, avg=117064.05, stdev=11270.81, samples=20 00:22:51.407 iops : min= 272, max= 480, avg=457.25, stdev=44.15, samples=20 00:22:51.407 lat (msec) : 50=0.26%, 100=0.45%, 250=98.47%, 500=0.82% 00:22:51.407 cpu : usr=0.63%, sys=0.89%, ctx=5510, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.407 issued rwts: total=0,4636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.407 job8: (groupid=0, jobs=1): err= 0: pid=92361: Wed Nov 6 14:27:17 2024 00:22:51.407 write: IOPS=250, BW=62.6MiB/s (65.6MB/s)(637MiB/10180msec); 0 zone resets 
00:22:51.407 slat (usec): min=18, max=106910, avg=3841.07, stdev=7197.11 00:22:51.407 clat (msec): min=74, max=468, avg=251.74, stdev=41.06 00:22:51.407 lat (msec): min=79, max=468, avg=255.58, stdev=41.21 00:22:51.407 clat percentiles (msec): 00:22:51.407 | 1.00th=[ 111], 5.00th=[ 211], 10.00th=[ 220], 20.00th=[ 228], 00:22:51.407 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 247], 00:22:51.407 | 70.00th=[ 264], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 313], 00:22:51.407 | 99.00th=[ 359], 99.50th=[ 414], 99.90th=[ 447], 99.95th=[ 468], 00:22:51.407 | 99.99th=[ 468] 00:22:51.407 bw ( KiB/s): min=51097, max=73875, per=7.08%, avg=63618.20, stdev=7485.28, samples=20 00:22:51.407 iops : min= 199, max= 288, avg=248.45, stdev=29.25, samples=20 00:22:51.407 lat (msec) : 100=0.63%, 250=65.19%, 500=34.18% 00:22:51.407 cpu : usr=0.40%, sys=0.48%, ctx=3087, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.407 issued rwts: total=0,2548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.407 job9: (groupid=0, jobs=1): err= 0: pid=92362: Wed Nov 6 14:27:17 2024 00:22:51.407 write: IOPS=291, BW=72.9MiB/s (76.5MB/s)(739MiB/10132msec); 0 zone resets 00:22:51.407 slat (usec): min=14, max=22760, avg=3380.65, stdev=5911.95 00:22:51.407 clat (msec): min=21, max=352, avg=215.91, stdev=23.05 00:22:51.407 lat (msec): min=21, max=352, avg=219.29, stdev=22.70 00:22:51.407 clat percentiles (msec): 00:22:51.407 | 1.00th=[ 107], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 209], 00:22:51.407 | 30.00th=[ 213], 40.00th=[ 215], 50.00th=[ 218], 60.00th=[ 222], 00:22:51.407 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 230], 95.00th=[ 232], 00:22:51.407 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 338], 99.95th=[ 351], 00:22:51.407 | 99.99th=[ 351] 00:22:51.407 bw ( KiB/s): min=67584, max=77824, per=8.24%, avg=74053.60, stdev=2884.95, samples=20 00:22:51.407 iops : min= 264, max= 304, avg=289.25, stdev=11.29, samples=20 00:22:51.407 lat (msec) : 50=0.41%, 100=0.54%, 250=96.41%, 500=2.64% 00:22:51.407 cpu : usr=0.47%, sys=0.65%, ctx=3502, majf=0, minf=1 00:22:51.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:22:51.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.408 issued rwts: total=0,2956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.408 job10: (groupid=0, jobs=1): err= 0: pid=92363: Wed Nov 6 14:27:17 2024 00:22:51.408 write: IOPS=250, BW=62.7MiB/s (65.7MB/s)(638MiB/10180msec); 0 zone resets 00:22:51.408 slat (usec): min=16, max=28598, avg=3918.33, stdev=6970.12 00:22:51.408 clat (msec): min=11, max=473, avg=251.18, stdev=45.58 00:22:51.408 lat (msec): min=11, max=473, avg=255.10, stdev=45.78 00:22:51.408 clat percentiles (msec): 00:22:51.408 | 1.00th=[ 71], 5.00th=[ 209], 10.00th=[ 220], 20.00th=[ 228], 00:22:51.408 | 30.00th=[ 234], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 247], 00:22:51.408 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 313], 00:22:51.408 | 99.00th=[ 363], 99.50th=[ 418], 99.90th=[ 456], 99.95th=[ 472], 00:22:51.408 | 99.99th=[ 472] 00:22:51.408 bw ( KiB/s): 
min=51097, max=75927, per=7.09%, avg=63746.40, stdev=7411.07, samples=20 00:22:51.408 iops : min= 199, max= 296, avg=248.95, stdev=28.95, samples=20 00:22:51.408 lat (msec) : 20=0.16%, 50=0.47%, 100=0.78%, 250=63.14%, 500=35.45% 00:22:51.408 cpu : usr=0.43%, sys=0.54%, ctx=3304, majf=0, minf=1 00:22:51.408 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:22:51.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.408 issued rwts: total=0,2553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.408 00:22:51.408 Run status group 0 (all jobs): 00:22:51.408 WRITE: bw=878MiB/s (921MB/s), 62.0MiB/s-115MiB/s (65.0MB/s-121MB/s), io=8943MiB (9377MB), run=10092-10187msec 00:22:51.408 00:22:51.408 Disk stats (read/write): 00:22:51.408 nvme0n1: ios=50/9255, merge=0/0, ticks=25/1227325, in_queue=1227350, util=98.59% 00:22:51.408 nvme10n1: ios=49/5802, merge=0/0, ticks=28/1222315, in_queue=1222343, util=98.74% 00:22:51.408 nvme1n1: ios=49/5018, merge=0/0, ticks=55/1217184, in_queue=1217239, util=98.90% 00:22:51.408 nvme2n1: ios=48/5782, merge=0/0, ticks=24/1221385, in_queue=1221409, util=98.79% 00:22:51.408 nvme3n1: ios=44/4952, merge=0/0, ticks=28/1214868, in_queue=1214896, util=98.68% 00:22:51.408 nvme4n1: ios=13/9161, merge=0/0, ticks=4/1226823, in_queue=1226827, util=98.74% 00:22:51.408 nvme5n1: ios=13/5694, merge=0/0, ticks=10/1219773, in_queue=1219783, util=98.68% 00:22:51.408 nvme6n1: ios=15/9239, merge=0/0, ticks=11/1227142, in_queue=1227153, util=98.83% 00:22:51.408 nvme7n1: ios=0/5004, merge=0/0, ticks=0/1217541, in_queue=1217541, util=98.88% 00:22:51.408 nvme8n1: ios=0/5837, merge=0/0, ticks=0/1220969, in_queue=1220969, util=98.92% 00:22:51.408 nvme9n1: ios=0/5016, merge=0/0, ticks=0/1216315, in_queue=1216315, util=98.97% 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:51.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:51.408 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:22:51.408 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:51.408 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:51.408 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:51.408 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:51.408 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.408 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:51.409 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:51.409 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:51.409 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:51.409 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:51.409 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.409 rmmod nvme_tcp 00:22:51.409 rmmod nvme_fabrics 00:22:51.409 rmmod nvme_keyring 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:22:51.409 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 91643 ']' 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 91643 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 91643 ']' 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 91643 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91643 00:22:51.410 killing process with pid 91643 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91643' 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 91643 00:22:51.410 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 91643 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:52.833 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:22:53.092 00:22:53.092 real 0m50.502s 00:22:53.092 user 3m8.984s 00:22:53.092 sys 0m9.309s 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:53.092 14:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:53.092 ************************************ 00:22:53.092 END TEST nvmf_multiconnection 00:22:53.092 ************************************ 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:53.092 ************************************ 00:22:53.092 START TEST nvmf_initiator_timeout 00:22:53.092 ************************************ 00:22:53.092 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:53.352 * Looking for test storage... 00:22:53.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.352 --rc genhtml_branch_coverage=1 00:22:53.352 --rc genhtml_function_coverage=1 00:22:53.352 --rc genhtml_legend=1 00:22:53.352 --rc geninfo_all_blocks=1 00:22:53.352 --rc geninfo_unexecuted_blocks=1 00:22:53.352 00:22:53.352 ' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.352 --rc genhtml_branch_coverage=1 00:22:53.352 --rc genhtml_function_coverage=1 00:22:53.352 --rc genhtml_legend=1 00:22:53.352 --rc geninfo_all_blocks=1 00:22:53.352 --rc geninfo_unexecuted_blocks=1 00:22:53.352 00:22:53.352 ' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.352 --rc genhtml_branch_coverage=1 00:22:53.352 --rc genhtml_function_coverage=1 00:22:53.352 --rc genhtml_legend=1 00:22:53.352 --rc geninfo_all_blocks=1 00:22:53.352 --rc geninfo_unexecuted_blocks=1 00:22:53.352 00:22:53.352 ' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.352 --rc genhtml_branch_coverage=1 00:22:53.352 --rc genhtml_function_coverage=1 00:22:53.352 --rc genhtml_legend=1 00:22:53.352 --rc geninfo_all_blocks=1 00:22:53.352 --rc geninfo_unexecuted_blocks=1 00:22:53.352 00:22:53.352 ' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:22:53.352 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.353 14:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.353 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
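The nvmf_veth_init sequence replayed over the next several entries builds the test network out of veth pairs joined by a bridge, with the target ends moved into a private network namespace so initiator and target traverse a real IP path. A minimal sketch of the first initiator/target pair, condensed from the commands visible below (interface names and addresses are taken verbatim from this log; the second pair, the individual "ip link set ... up" steps, and error handling are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end plus its bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end plus its bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # join the two peer ends on one bridge
    ip link set nvmf_tgt_br master nvmf_br

The second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is wired identically; the iptables ACCEPT rules and the four pings that follow confirm the data path before the target is started. The "Cannot find device" messages just below are the harness probing for leftovers from a previous run before it recreates everything.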
00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:53.353 Cannot find device "nvmf_init_br" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:53.353 Cannot find device "nvmf_init_br2" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:53.353 Cannot find device "nvmf_tgt_br" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.353 Cannot find device "nvmf_tgt_br2" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:53.353 Cannot find device "nvmf_init_br" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:53.353 Cannot find device "nvmf_init_br2" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:53.353 Cannot find device "nvmf_tgt_br" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:53.353 Cannot find device "nvmf_tgt_br2" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:22:53.353 14:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:53.353 Cannot find device "nvmf_br" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:53.353 Cannot find device "nvmf_init_if" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:53.353 Cannot find device "nvmf_init_if2" 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:53.353 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:53.613 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:53.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:53.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:53.613 00:22:53.613 --- 10.0.0.3 ping statistics --- 00:22:53.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.613 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:53.613 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:53.613 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.025 ms 00:22:53.613 00:22:53.613 --- 10.0.0.4 ping statistics --- 00:22:53.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.613 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:53.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.012 ms 00:22:53.613 00:22:53.613 --- 10.0.0.1 ping statistics --- 00:22:53.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.613 rtt min/avg/max/mdev = 0.012/0.012/0.012/0.000 ms 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:53.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:22:53.613 00:22:53.613 --- 10.0.0.2 ping statistics --- 00:22:53.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.613 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=92802 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 92802 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 92802 ']' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.613 14:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:53.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.613 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:53.614 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:53.614 [2024-11-06 14:27:21.212551] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:53.614 [2024-11-06 14:27:21.212661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.871 [2024-11-06 14:27:21.372775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.871 [2024-11-06 14:27:21.472765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.871 [2024-11-06 14:27:21.472810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.871 [2024-11-06 14:27:21.472822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.871 [2024-11-06 14:27:21.472829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.871 [2024-11-06 14:27:21.472838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
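With the namespace in place, nvmfappstart launches the target inside it and the initiator_timeout test provisions everything over JSON-RPC (rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client). A condensed sketch of the sequence replayed in the entries below, with every command and argument taken verbatim from this log:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on cores 0-3
    # ...wait for /var/tmp/spdk.sock to come up, then:
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB malloc bdev, 512 B blocks
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # wrap it in a delay bdev, all latency knobs at 30
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The host side then runs nvme connect against 10.0.0.3:4420 and kicks off a 60-second fio write-and-verify job; mid-run, the bdev_delay_update_latency calls push the Delay0 latencies up to 31000000 (310000000 for p99_write; 31 s and 310 s if the usual microsecond units apply) and later drop them all back to 30, presumably to trip and then clear the initiator timeout path this test exercises.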
00:22:53.871 [2024-11-06 14:27:21.474235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.871 [2024-11-06 14:27:21.474324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.871 [2024-11-06 14:27:21.474557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.871 [2024-11-06 14:27:21.474589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.438 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:54.438 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:22:54.438 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.438 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.438 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.697 Malloc0 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.697 Delay0 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.697 [2024-11-06 14:27:22.166684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.697 14:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.697 [2024-11-06 14:27:22.194863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.697 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:22:54.955 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:54.955 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:22:54.955 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:22:54.955 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:22:54.955 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=92884 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:56.855 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:56.855 [global] 00:22:56.855 thread=1 00:22:56.855 invalidate=1 00:22:56.855 rw=write 00:22:56.855 time_based=1 00:22:56.855 runtime=60 00:22:56.855 ioengine=libaio 00:22:56.855 direct=1 00:22:56.855 bs=4096 00:22:56.855 iodepth=1 00:22:56.855 norandommap=0 00:22:56.855 numjobs=1 00:22:56.855 00:22:56.855 verify_dump=1 00:22:56.855 verify_backlog=512 00:22:56.855 verify_state_save=0 00:22:56.855 do_verify=1 00:22:56.855 verify=crc32c-intel 00:22:56.855 [job0] 00:22:56.855 filename=/dev/nvme0n1 00:22:56.855 Could not set queue depth (nvme0n1) 00:22:57.112 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:57.112 fio-3.35 00:22:57.112 Starting 1 thread 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.396 true 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.396 true 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.396 true 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.396 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.396 true 00:23:00.397 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.397 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:23:02.929 true 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:02.929 true 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:02.929 true 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:02.929 true 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:02.929 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 92884 00:23:59.151 00:23:59.151 job0: (groupid=0, jobs=1): err= 0: pid=92905: Wed Nov 6 14:28:24 2024 00:23:59.151 read: IOPS=1160, BW=4642KiB/s (4754kB/s)(272MiB/60000msec) 00:23:59.151 slat (nsec): min=4597, max=66073, avg=6097.37, stdev=1592.54 00:23:59.151 clat (usec): min=116, max=2433, avg=143.22, stdev=18.02 00:23:59.151 lat (usec): min=121, max=2440, avg=149.32, stdev=18.42 00:23:59.151 clat percentiles (usec): 00:23:59.151 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:23:59.151 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:23:59.151 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 165], 00:23:59.151 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 233], 99.95th=[ 379], 00:23:59.151 | 99.99th=[ 494] 00:23:59.151 write: IOPS=1162, BW=4652KiB/s (4763kB/s)(273MiB/60000msec); 0 zone resets 00:23:59.151 slat (usec): min=7, max=10024, avg= 9.75, stdev=48.28 00:23:59.151 clat (usec): min=97, max=40589k, avg=699.77, stdev=153661.29 00:23:59.151 lat (usec): min=105, max=40589k, avg=709.52, stdev=153661.30 00:23:59.151 clat percentiles (usec): 00:23:59.151 | 1.00th=[ 103], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 111], 00:23:59.151 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:23:59.151 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 137], 00:23:59.151 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 255], 99.95th=[ 281], 00:23:59.151 | 99.99th=[ 375] 00:23:59.151 bw ( KiB/s): min= 3960, 
max=16384, per=100.00%, avg=13967.72, stdev=2264.85, samples=39 00:23:59.151 iops : min= 990, max= 4096, avg=3491.92, stdev=566.21, samples=39 00:23:59.151 lat (usec) : 100=0.06%, 250=99.84%, 500=0.10%, 750=0.01%, 1000=0.01% 00:23:59.151 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:23:59.151 cpu : usr=0.32%, sys=1.36%, ctx=139418, majf=0, minf=5 00:23:59.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.151 issued rwts: total=69632,69774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:59.151 00:23:59.151 Run status group 0 (all jobs): 00:23:59.151 READ: bw=4642KiB/s (4754kB/s), 4642KiB/s-4642KiB/s (4754kB/s-4754kB/s), io=272MiB (285MB), run=60000-60000msec 00:23:59.151 WRITE: bw=4652KiB/s (4763kB/s), 4652KiB/s-4652KiB/s (4763kB/s-4763kB/s), io=273MiB (286MB), run=60000-60000msec 00:23:59.151 00:23:59.151 Disk stats (read/write): 00:23:59.151 nvme0n1: ios=69492/69526, merge=0/0, ticks=10109/8492, in_queue=18601, util=99.54% 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:59.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:59.151 nvmf hotplug test: fio successful as expected 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:59.151 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM 
EXIT 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.152 rmmod nvme_tcp 00:23:59.152 rmmod nvme_fabrics 00:23:59.152 rmmod nvme_keyring 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 92802 ']' 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 92802 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 92802 ']' 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 92802 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92802 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:59.152 killing process with pid 92802 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92802' 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 92802 00:23:59.152 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 92802 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.152 14:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:23:59.152 ************************************ 00:23:59.152 END TEST nvmf_initiator_timeout 00:23:59.152 ************************************ 00:23:59.152 00:23:59.152 real 1m5.107s 00:23:59.152 user 4m9.151s 00:23:59.152 sys 0m6.230s 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:59.152 14:28:25 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:59.152 ************************************ 00:23:59.152 START TEST nvmf_nsid 00:23:59.152 ************************************ 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:59.152 * Looking for test storage... 00:23:59.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:59.152 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.153 --rc genhtml_branch_coverage=1 00:23:59.153 --rc genhtml_function_coverage=1 00:23:59.153 --rc genhtml_legend=1 00:23:59.153 --rc geninfo_all_blocks=1 00:23:59.153 --rc geninfo_unexecuted_blocks=1 00:23:59.153 00:23:59.153 ' 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.153 --rc genhtml_branch_coverage=1 00:23:59.153 --rc genhtml_function_coverage=1 00:23:59.153 --rc genhtml_legend=1 00:23:59.153 --rc geninfo_all_blocks=1 00:23:59.153 --rc geninfo_unexecuted_blocks=1 00:23:59.153 00:23:59.153 ' 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.153 --rc genhtml_branch_coverage=1 00:23:59.153 --rc genhtml_function_coverage=1 00:23:59.153 --rc genhtml_legend=1 00:23:59.153 --rc geninfo_all_blocks=1 00:23:59.153 --rc geninfo_unexecuted_blocks=1 00:23:59.153 00:23:59.153 ' 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.153 --rc genhtml_branch_coverage=1 00:23:59.153 --rc genhtml_function_coverage=1 00:23:59.153 --rc genhtml_legend=1 00:23:59.153 --rc geninfo_all_blocks=1 00:23:59.153 --rc geninfo_unexecuted_blocks=1 00:23:59.153 00:23:59.153 ' 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
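The xtrace above is scripts/common.sh deciding which lcov option set to use: "lt 1.15 2" splits both version strings on ".", "-", and ":" and compares the components numerically, left to right. A minimal sketch of that comparison, assuming plain integer components (the real helper additionally validates each field through its "decimal" wrapper, visible in the trace):

  cmp_versions_sketch() {
      local IFS=.-: i
      local -a v1=($1) v2=($2)    # split on the same separators the trace shows
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          if (( ${v1[i]:-0} < ${v2[i]:-0} )); then echo lt; return; fi
          if (( ${v1[i]:-0} > ${v2[i]:-0} )); then echo gt; return; fi
      done
      echo eq
  }
  cmp_versions_sketch 1.15 2   # prints "lt", so the lcov 1.x LCOV_OPTS get exported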
00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:59.153 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:59.153 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:59.154 Cannot find device "nvmf_init_br" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:59.154 Cannot find device "nvmf_init_br2" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:59.154 Cannot find device "nvmf_tgt_br" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:59.154 Cannot find device "nvmf_tgt_br2" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:59.154 Cannot find device "nvmf_init_br" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:59.154 Cannot find device "nvmf_init_br2" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:59.154 Cannot find device "nvmf_tgt_br" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:59.154 Cannot find device "nvmf_tgt_br2" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:59.154 Cannot find device "nvmf_br" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:59.154 Cannot find device "nvmf_init_if" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:59.154 Cannot find device "nvmf_init_if2" 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:59.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:23:59.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
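Everything from common.sh@177 to @214 above is nvmf_veth_init building the NET_TYPE=virt topology: a private namespace for the target, veth pairs whose host-side peers hang off one bridge, and 10.0.0.x/24 addresses on each end. Condensed to one initiator/target pair, with names and addresses exactly as traced (the nvmf_init_if2/nvmf_tgt_if2 pair with 10.0.0.2 and 10.0.0.4 is set up identically, and @217-@219 then insert the iptables ACCEPT rules for port 4420):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

The four pings that follow (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) verify both directions across the bridge before any NVMe/TCP traffic is attempted.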
00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:59.154 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:59.154 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:59.154 00:23:59.154 --- 10.0.0.3 ping statistics --- 00:23:59.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.154 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:59.154 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:59.154 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:23:59.154 00:23:59.154 --- 10.0.0.4 ping statistics --- 00:23:59.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.154 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:59.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:59.154 00:23:59.154 --- 10.0.0.1 ping statistics --- 00:23:59.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.154 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:59.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:59.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:23:59.154 00:23:59.154 --- 10.0.0.2 ping statistics --- 00:23:59.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.154 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:59.154 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=93785 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 93785 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 93785 ']' 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:59.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:59.155 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.155 [2024-11-06 14:28:26.407492] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:59.155 [2024-11-06 14:28:26.407597] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.155 [2024-11-06 14:28:26.566897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.155 [2024-11-06 14:28:26.668007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.155 [2024-11-06 14:28:26.668052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.155 [2024-11-06 14:28:26.668063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.155 [2024-11-06 14:28:26.668079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.155 [2024-11-06 14:28:26.668087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.155 [2024-11-06 14:28:26.668939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=93829 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
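get_main_ns_ip, traced at common.sh@769-@783 above, is just a transport-to-variable lookup followed by an indirect expansion. A stripped-down sketch (the real helper reads the transport from the test environment; here it is a parameter for clarity):

  get_main_ns_ip_sketch() {
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      local var=${ip_candidates[$1]}
      echo "${!var}"   # indirect expansion; resolves to 10.0.0.1 for tcp in this run
  }
  get_main_ns_ip_sketch tcp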
00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5079a1f6-0c38-45da-9b91-2e182eacdf8b 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=40a2bf2f-31df-4065-b381-5ef1931c8971 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c936782a-67e7-4410-80f8-32e3247ba04d 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.752 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.752 null0 00:23:59.752 null1 00:23:59.752 null2 00:23:59.752 [2024-11-06 14:28:27.344485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.752 [2024-11-06 14:28:27.368628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:59.752 [2024-11-06 14:28:27.375160] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:59.752 [2024-11-06 14:28:27.375262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93829 ] 00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 93829 /var/tmp/tgt2.sock 00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 93829 ']' 00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:00.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
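nsid.sh runs a second spdk_tgt with its own RPC socket (-r /var/tmp/tgt2.sock) so two targets can coexist in one test; every RPC meant for it is then routed with rpc.py -s, as target/nsid.sh@80 does below. The batched RPCs themselves are not visible in this trace; a hypothetical equivalent using only RPC names and flags already seen in this log, with the listener address and port matching the nvme connect that follows:

  # illustrative only; the actual RPC batch sent by nsid.sh is not shown in the trace
  rpc2() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock "$@"; }
  rpc2 nvmf_create_transport -t tcp
  rpc2 nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  rpc2 nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421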
00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:00.011 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:00.011 [2024-11-06 14:28:27.530080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.011 [2024-11-06 14:28:27.613212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.944 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:00.944 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:24:00.944 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:01.202 [2024-11-06 14:28:28.632397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.202 [2024-11-06 14:28:28.648536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:01.202 nvme0n1 nvme0n2 00:24:01.202 nvme1n1 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:01.202 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:24:01.459 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:24:01.459 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- common/autotest_common.sh@1248 -- # return 0 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5079a1f6-0c38-45da-9b91-2e182eacdf8b 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5079a1f60c3845da9b912e182eacdf8b 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5079A1F60C3845DA9B912E182EACDF8B 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5079A1F60C3845DA9B912E182EACDF8B == \5\0\7\9\A\1\F\6\0\C\3\8\4\5\D\A\9\B\9\1\2\E\1\8\2\E\A\C\D\F\8\B ]] 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:24:02.393 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 40a2bf2f-31df-4065-b381-5ef1931c8971 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=40a2bf2f31df4065b3815ef1931c8971 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 40A2BF2F31DF4065B3815EF1931C8971 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 40A2BF2F31DF4065B3815EF1931C8971 == \4\0\A\2\B\F\2\F\3\1\D\F\4\0\6\5\B\3\8\1\5\E\F\1\9\3\1\C\8\9\7\1 ]] 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c936782a-67e7-4410-80f8-32e3247ba04d 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c936782a67e7441080f832e3247ba04d 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C936782A67E7441080F832E3247BA04D 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C936782A67E7441080F832E3247BA04D == \C\9\3\6\7\8\2\A\6\7\E\7\4\4\1\0\8\0\F\8\3\2\E\3\2\4\7\B\A\0\4\D ]] 00:24:02.394 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:02.651 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:02.651 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 93829 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 93829 ']' 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 93829 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93829 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:02.652 killing process with pid 93829 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93829' 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 93829 00:24:02.652 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 93829 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:04.024 14:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.024 rmmod nvme_tcp 00:24:04.024 rmmod nvme_fabrics 00:24:04.024 rmmod nvme_keyring 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 93785 ']' 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 93785 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 93785 ']' 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 93785 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93785 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93785' 00:24:04.024 killing process with pid 93785 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 93785 00:24:04.024 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 93785 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 
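The NGUID checks traced in this test reduce to a simple identity: an NVMe namespace's globally unique identifier is the namespace's UUID with the dashes stripped. Below is a minimal sketch of one of those checks, using the same tr, nvme id-ns, and jq calls as the trace; the variable names are illustrative, and the uppercase conversion assumes bash 4+.

    # uuid2nguid: the NGUID is just the UUID without its dashes.
    uuid="5079a1f6-0c38-45da-9b91-2e182eacdf8b"          # UUID given to NSID 1 above
    expected=$(tr -d - <<< "$uuid")                      # -> 5079a1f60c3845da9b912e182eacdf8b
    # nvme-cli reports the namespace's NGUID in its JSON identify output.
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # The backslash-heavy pattern in the xtrace is only how bash prints a
    # quoted right-hand side of [[ == ]]; the comparison itself is literal.
    [[ ${actual^^} == "${expected^^}" ]] && echo "NSID 1 NGUID matches its UUID"

The same check repeats above for nvme0n2 and nvme0n3 with their own UUIDs before the controller is disconnected.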
00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.591 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:24:04.850 00:24:04.850 real 0m6.532s 00:24:04.850 user 0m10.301s 00:24:04.850 sys 0m1.203s 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:04.850 ************************************ 00:24:04.850 END TEST nvmf_nsid 00:24:04.850 ************************************ 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:04.850 ************************************ 00:24:04.850 END TEST nvmf_target_extra 00:24:04.850 ************************************ 00:24:04.850 00:24:04.850 real 13m37.646s 00:24:04.850 user 41m56.636s 00:24:04.850 sys 1m48.599s 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:04.850 14:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:04.850 14:28:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:04.850 14:28:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:04.850 14:28:32 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:04.850 14:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.850 ************************************ 00:24:04.850 START TEST nvmf_host 00:24:04.850 ************************************ 00:24:04.850 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:05.110 * Looking for test storage... 
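Each suite here is dispatched through run_test, which accounts for the START TEST/END TEST banners and the real/user/sys summary printed at the end of nvmf_nsid. A minimal sketch of that shape follows; SPDK's actual run_test lives in autotest_common.sh and handles xtrace and error propagation beyond what is shown, so treat this as an illustration only.

    # run_test-style wrapper: banner, time the suite, banner again.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"           # produces the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g. run_test_sketch nvmf_host ./test/nvmf/nvmf_host.sh --transport=tcp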
00:24:05.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:05.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.110 --rc genhtml_branch_coverage=1 00:24:05.110 --rc genhtml_function_coverage=1 00:24:05.110 --rc genhtml_legend=1 00:24:05.110 --rc geninfo_all_blocks=1 00:24:05.110 --rc geninfo_unexecuted_blocks=1 00:24:05.110 00:24:05.110 ' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:05.110 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:05.110 --rc genhtml_branch_coverage=1 00:24:05.110 --rc genhtml_function_coverage=1 00:24:05.110 --rc genhtml_legend=1 00:24:05.110 --rc geninfo_all_blocks=1 00:24:05.110 --rc geninfo_unexecuted_blocks=1 00:24:05.110 00:24:05.110 ' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:05.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.110 --rc genhtml_branch_coverage=1 00:24:05.110 --rc genhtml_function_coverage=1 00:24:05.110 --rc genhtml_legend=1 00:24:05.110 --rc geninfo_all_blocks=1 00:24:05.110 --rc geninfo_unexecuted_blocks=1 00:24:05.110 00:24:05.110 ' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:05.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.110 --rc genhtml_branch_coverage=1 00:24:05.110 --rc genhtml_function_coverage=1 00:24:05.110 --rc genhtml_legend=1 00:24:05.110 --rc geninfo_all_blocks=1 00:24:05.110 --rc geninfo_unexecuted_blocks=1 00:24:05.110 00:24:05.110 ' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.110 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:05.110 14:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
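The "[: : integer expression expected" line above is not a test failure: common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq requires integer operands, so an unset setting makes the comparison noisy (and false). A defensive form gives the operand a numeric default; the flag name below is illustrative, since the trace does not show which variable was empty.

    # '[' "" -eq 1 ']' prints "integer expression expected"; defaulting the
    # operand keeps the check quiet and unambiguous when the flag is unset.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi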
00:24:05.111 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.111 ************************************ 00:24:05.111 START TEST nvmf_multicontroller 00:24:05.111 ************************************ 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:05.111 * Looking for test storage... 00:24:05.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:05.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.111 --rc genhtml_branch_coverage=1 00:24:05.111 --rc genhtml_function_coverage=1 00:24:05.111 --rc genhtml_legend=1 00:24:05.111 --rc geninfo_all_blocks=1 00:24:05.111 --rc geninfo_unexecuted_blocks=1 00:24:05.111 00:24:05.111 ' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:05.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.111 --rc genhtml_branch_coverage=1 00:24:05.111 --rc genhtml_function_coverage=1 00:24:05.111 --rc genhtml_legend=1 00:24:05.111 --rc geninfo_all_blocks=1 00:24:05.111 --rc geninfo_unexecuted_blocks=1 00:24:05.111 00:24:05.111 ' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:05.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.111 --rc genhtml_branch_coverage=1 00:24:05.111 --rc genhtml_function_coverage=1 00:24:05.111 --rc genhtml_legend=1 00:24:05.111 --rc geninfo_all_blocks=1 00:24:05.111 --rc geninfo_unexecuted_blocks=1 00:24:05.111 00:24:05.111 ' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:05.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.111 --rc genhtml_branch_coverage=1 00:24:05.111 --rc genhtml_function_coverage=1 00:24:05.111 --rc genhtml_legend=1 00:24:05.111 --rc geninfo_all_blocks=1 00:24:05.111 --rc geninfo_unexecuted_blocks=1 00:24:05.111 00:24:05.111 ' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:05.111 14:28:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:05.111 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.370 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:05.371 14:28:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:05.371 14:28:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:05.371 Cannot find device "nvmf_init_br" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:05.371 Cannot find device "nvmf_init_br2" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:05.371 Cannot find device "nvmf_tgt_br" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.371 Cannot find device "nvmf_tgt_br2" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.371 Cannot find device "nvmf_init_br" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:05.371 Cannot find device "nvmf_init_br2" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:05.371 Cannot find device "nvmf_tgt_br" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:05.371 Cannot find device "nvmf_tgt_br2" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:05.371 Cannot find device "nvmf_br" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:05.371 Cannot find device "nvmf_init_if" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:05.371 Cannot find device "nvmf_init_if2" 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.371 14:28:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.371 14:28:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.371 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:24:05.630 00:24:05.630 --- 10.0.0.3 ping statistics --- 00:24:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.630 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.630 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.630 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:24:05.630 00:24:05.630 --- 10.0.0.4 ping statistics --- 00:24:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.630 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:24:05.630 00:24:05.630 --- 10.0.0.1 ping statistics --- 00:24:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.630 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:05.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:24:05.630 00:24:05.630 --- 10.0.0.2 ping statistics --- 00:24:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.630 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=94221 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 94221 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 94221 ']' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:05.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:05.630 14:28:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.630 [2024-11-06 14:28:33.127638] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:05.630 [2024-11-06 14:28:33.127743] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.889 [2024-11-06 14:28:33.287094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:05.889 [2024-11-06 14:28:33.406713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.889 [2024-11-06 14:28:33.406766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.889 [2024-11-06 14:28:33.406779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.889 [2024-11-06 14:28:33.406787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.889 [2024-11-06 14:28:33.406798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.889 [2024-11-06 14:28:33.408344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.889 [2024-11-06 14:28:33.408455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.889 [2024-11-06 14:28:33.408454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.454 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.454 [2024-11-06 14:28:34.084660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 Malloc0 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 [2024-11-06 14:28:34.180831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 [2024-11-06 14:28:34.188727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 Malloc1 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=94279 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 94279 /var/tmp/bdevperf.sock 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 94279 ']' 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:06.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
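The bdevperf process above was started with -z, so it sits idle until it is driven over the JSON-RPC socket /var/tmp/bdevperf.sock; the rpc_cmd wrapper used throughout this test forwards to SPDK's scripts/rpc.py against whichever socket -s names. A minimal by-hand equivalent of the first controller attach that follows (rpc.py path and a still-listening bdevperf session assumed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach the first controller path, as host/multicontroller.sh@50 does below
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # exactly one controller should be visible afterwards
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe

Note the two different -s flags: the one before the method name selects the RPC socket, the one after it is the trsvcid (port) parameter of bdev_nvme_attach_controller.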
00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:06.712 14:28:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.644 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:07.644 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:24:07.644 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:07.644 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.644 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.903 NVMe0n1 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.903 1 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.903 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.904 2024/11/06 14:28:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:07.904 request: 00:24:07.904 { 00:24:07.904 "method": "bdev_nvme_attach_controller", 00:24:07.904 "params": { 00:24:07.904 "name": "NVMe0", 00:24:07.904 "trtype": "tcp", 00:24:07.904 "traddr": "10.0.0.3", 00:24:07.904 "adrfam": "ipv4", 00:24:07.904 "trsvcid": "4420", 00:24:07.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.904 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:07.904 "hostaddr": "10.0.0.1", 00:24:07.904 "prchk_reftag": false, 00:24:07.904 "prchk_guard": false, 00:24:07.904 "hdgst": false, 00:24:07.904 "ddgst": false, 00:24:07.904 "allow_unrecognized_csi": false 00:24:07.904 } 00:24:07.904 } 00:24:07.904 Got JSON-RPC error response 00:24:07.904 GoRPCClient: error on JSON-RPC call 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.904 2024/11/06 14:28:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:07.904 request: 00:24:07.904 { 00:24:07.904 "method": "bdev_nvme_attach_controller", 00:24:07.904 "params": { 00:24:07.904 "name": "NVMe0", 00:24:07.904 "trtype": "tcp", 00:24:07.904 "traddr": "10.0.0.3", 00:24:07.904 "adrfam": "ipv4", 00:24:07.904 "trsvcid": "4420", 00:24:07.904 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:07.904 "hostaddr": "10.0.0.1", 00:24:07.904 "prchk_reftag": false, 00:24:07.904 "prchk_guard": false, 00:24:07.904 "hdgst": false, 00:24:07.904 "ddgst": false, 00:24:07.904 "allow_unrecognized_csi": false 00:24:07.904 } 00:24:07.904 } 00:24:07.904 Got JSON-RPC error response 00:24:07.904 GoRPCClient: error on JSON-RPC call 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.904 2024/11/06 14:28:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:24:07.904 request: 00:24:07.904 { 00:24:07.904 
"method": "bdev_nvme_attach_controller", 00:24:07.904 "params": { 00:24:07.904 "name": "NVMe0", 00:24:07.904 "trtype": "tcp", 00:24:07.904 "traddr": "10.0.0.3", 00:24:07.904 "adrfam": "ipv4", 00:24:07.904 "trsvcid": "4420", 00:24:07.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.904 "hostaddr": "10.0.0.1", 00:24:07.904 "prchk_reftag": false, 00:24:07.904 "prchk_guard": false, 00:24:07.904 "hdgst": false, 00:24:07.904 "ddgst": false, 00:24:07.904 "multipath": "disable", 00:24:07.904 "allow_unrecognized_csi": false 00:24:07.904 } 00:24:07.904 } 00:24:07.904 Got JSON-RPC error response 00:24:07.904 GoRPCClient: error on JSON-RPC call 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.904 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.904 2024/11/06 14:28:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:07.904 request: 00:24:07.904 { 00:24:07.904 "method": "bdev_nvme_attach_controller", 00:24:07.904 "params": { 00:24:07.904 "name": "NVMe0", 00:24:07.904 "trtype": "tcp", 00:24:07.904 "traddr": 
"10.0.0.3", 00:24:07.904 "adrfam": "ipv4", 00:24:07.904 "trsvcid": "4420", 00:24:07.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.904 "hostaddr": "10.0.0.1", 00:24:07.904 "prchk_reftag": false, 00:24:07.904 "prchk_guard": false, 00:24:07.904 "hdgst": false, 00:24:07.905 "ddgst": false, 00:24:07.905 "multipath": "failover", 00:24:07.905 "allow_unrecognized_csi": false 00:24:07.905 } 00:24:07.905 } 00:24:07.905 Got JSON-RPC error response 00:24:07.905 GoRPCClient: error on JSON-RPC call 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.905 NVMe0n1 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.905 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.905 14:28:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:07.905 14:28:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:09.280 { 00:24:09.280 "results": [ 00:24:09.280 { 00:24:09.280 "job": "NVMe0n1", 00:24:09.280 "core_mask": "0x1", 00:24:09.280 "workload": "write", 00:24:09.280 "status": "finished", 00:24:09.280 "queue_depth": 128, 00:24:09.280 "io_size": 4096, 00:24:09.280 "runtime": 1.006762, 00:24:09.280 "iops": 18355.877555966556, 00:24:09.280 "mibps": 71.70264670299436, 00:24:09.280 "io_failed": 0, 00:24:09.280 "io_timeout": 0, 00:24:09.280 "avg_latency_us": 6955.466975690976, 00:24:09.280 "min_latency_us": 4083.396923076923, 00:24:09.280 "max_latency_us": 12250.190769230769 00:24:09.280 } 00:24:09.280 ], 00:24:09.280 "core_count": 1 00:24:09.280 } 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.280 nvme1n1 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
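The perform_tests result above is internally consistent: the mibps field is just iops times the 4096-byte I/O size, and iops times average latency recovers the queue depth handed to bdevperf (-q 128). A quick check with values copied out of the JSON:

  awk 'BEGIN {
      iops = 18355.877555966556; io_size = 4096; avg_lat_us = 6955.466975690976
      printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)   # -> 71.70
      printf "in-flight:  %.1f\n",       iops * avg_lat_us / 1e6          # -> ~127.7
  }'

The second figure is Little's law (concurrency = rate x latency) and tracks the -q 128 setting, which is a useful sanity check whenever a bdevperf run reports suspicious numbers.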
00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.280 nvme1n1 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 94279 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 94279 ']' 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 94279 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:09.280 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94279 00:24:09.539 killing process with pid 94279 00:24:09.539 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:09.539 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:09.539 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94279' 00:24:09.539 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 94279 00:24:09.539 14:28:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 94279 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:24:10.107 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:10.107 [2024-11-06 14:28:34.349985] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:10.107 [2024-11-06 14:28:34.350114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94279 ] 00:24:10.107 [2024-11-06 14:28:34.509687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.107 [2024-11-06 14:28:34.612295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.107 [2024-11-06 14:28:35.489702] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 39119ee1-dfc8-4586-9b7c-0bc77b7eb758 already exists 00:24:10.107 [2024-11-06 14:28:35.489760] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:39119ee1-dfc8-4586-9b7c-0bc77b7eb758 alias for bdev NVMe1n1 00:24:10.107 [2024-11-06 14:28:35.489780] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:10.107 Running I/O for 1 seconds... 
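The *ERROR* lines in the try.txt excerpt above are expected noise for this test rather than a failure: NVMe0 and NVMe1 are two paths to the same namespace, so when the second controller tries to register its bdev under the shared namespace UUID (39119ee1-...), that name already exists and spdk_bdev_register() fails for the duplicate, which appears to be exactly the collision this stage exercises. While the bdevperf session is still up, the duplication can be seen over the same socket; a sketch (rpc.py path as before, jq assumed available):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # two controller paths are attached ...
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # ... but the namespace uuid may appear under only one registered bdev
  $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[] | "\(.name)  \(.uuid)"'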
00:24:10.107 18288.00 IOPS, 71.44 MiB/s 00:24:10.107 Latency(us) 00:24:10.107 [2024-11-06T14:28:37.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.107 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:10.107 NVMe0n1 : 1.01 18355.88 71.70 0.00 0.00 6955.47 4083.40 12250.19 00:24:10.107 [2024-11-06T14:28:37.742Z] =================================================================================================================== 00:24:10.107 [2024-11-06T14:28:37.742Z] Total : 18355.88 71.70 0.00 0.00 6955.47 4083.40 12250.19 00:24:10.107 Received shutdown signal, test time was about 1.000000 seconds 00:24:10.107 00:24:10.107 Latency(us) 00:24:10.107 [2024-11-06T14:28:37.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.107 [2024-11-06T14:28:37.742Z] =================================================================================================================== 00:24:10.107 [2024-11-06T14:28:37.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.107 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:10.107 rmmod nvme_tcp 00:24:10.107 rmmod nvme_fabrics 00:24:10.107 rmmod nvme_keyring 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 94221 ']' 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 94221 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 94221 ']' 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 94221 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94221 00:24:10.107 killing process with pid 94221 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:10.107 14:28:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94221' 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 94221 00:24:10.107 14:28:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 94221 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.042 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
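Collected in one place, the nvmftestfini teardown traced above amounts to the following (interface and namespace names as in this run; the final ip netns delete is an assumption about what _remove_spdk_ns does, since its output is redirected to /dev/null above):

  # drop only the SPDK-tagged iptables rules, then unwind the veth/bridge fixture
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" nomaster
      ip link set "$p" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed: what _remove_spdk_ns finishes with

Tagging each rule with an SPDK_NVMF comment at insert time is what makes the grep -v restore safe: only the test's own rules disappear, anything else in the ruleset survives.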
00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:24:11.043 00:24:11.043 real 0m6.040s 00:24:11.043 user 0m18.588s 00:24:11.043 sys 0m1.030s 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:11.043 ************************************ 00:24:11.043 END TEST nvmf_multicontroller 00:24:11.043 ************************************ 00:24:11.043 14:28:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.302 14:28:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:11.302 14:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.303 ************************************ 00:24:11.303 START TEST nvmf_aer 00:24:11.303 ************************************ 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:11.303 * Looking for test storage... 00:24:11.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:11.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.303 --rc genhtml_branch_coverage=1 00:24:11.303 --rc genhtml_function_coverage=1 00:24:11.303 --rc genhtml_legend=1 00:24:11.303 --rc geninfo_all_blocks=1 00:24:11.303 --rc geninfo_unexecuted_blocks=1 00:24:11.303 00:24:11.303 ' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:11.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.303 --rc genhtml_branch_coverage=1 00:24:11.303 --rc genhtml_function_coverage=1 00:24:11.303 --rc genhtml_legend=1 00:24:11.303 --rc geninfo_all_blocks=1 00:24:11.303 --rc geninfo_unexecuted_blocks=1 00:24:11.303 00:24:11.303 ' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:11.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.303 --rc genhtml_branch_coverage=1 00:24:11.303 --rc genhtml_function_coverage=1 00:24:11.303 --rc genhtml_legend=1 00:24:11.303 --rc geninfo_all_blocks=1 00:24:11.303 --rc geninfo_unexecuted_blocks=1 00:24:11.303 00:24:11.303 ' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:11.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.303 --rc genhtml_branch_coverage=1 00:24:11.303 --rc genhtml_function_coverage=1 00:24:11.303 --rc genhtml_legend=1 00:24:11.303 --rc geninfo_all_blocks=1 00:24:11.303 --rc geninfo_unexecuted_blocks=1 00:24:11.303 00:24:11.303 ' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.303 
14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.303 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.303 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
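The "common.sh: line 33: [: : integer expression expected" complaint above is ordinary bash behavior, not a test failure: an empty string reached a numeric test ('[' '' -eq 1 ']'). A two-line reproduction, plus the usual guard:

  x=""
  [ "$x" -eq 1 ] && echo yes        # -> [: : integer expression expected
  [ "${x:-0}" -eq 1 ] && echo yes   # defaulting the operand keeps the test quiet

The script tolerates it because the failed test simply evaluates false and execution continues, but the stderr line lands in the log as seen here.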
00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:11.304 Cannot find device "nvmf_init_br" 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:11.304 Cannot find device "nvmf_init_br2" 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:11.304 Cannot find device "nvmf_tgt_br" 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:11.304 Cannot find device "nvmf_tgt_br2" 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:11.304 Cannot find device "nvmf_init_br" 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:11.304 Cannot find device "nvmf_init_br2" 00:24:11.304 14:28:38 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:11.304 Cannot find device "nvmf_tgt_br" 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:24:11.304 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:11.563 Cannot find device "nvmf_tgt_br2" 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:11.563 Cannot find device "nvmf_br" 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:11.563 Cannot find device "nvmf_init_if" 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:11.563 Cannot find device "nvmf_init_if2" 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:11.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:11.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:11.563 14:28:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:11.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:11.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:24:11.563 00:24:11.563 --- 10.0.0.3 ping statistics --- 00:24:11.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.563 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:11.563 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:11.563 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:24:11.563 00:24:11.563 --- 10.0.0.4 ping statistics --- 00:24:11.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.563 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:11.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
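The nvmf_veth_init trace above is building a self-contained NVMe/TCP test network: veth pairs, one bridge, and a target network namespace. A minimal sketch of that topology, using the names and addresses from the trace (the *_if2/*_br2 pair and the link-up steps are elided, and the harness's ipts wrapper is replaced with plain iptables here):

# One initiator-side veth pair and one target-side pair; the target end of the
# second pair is moved into the nvmf_tgt_ns_spdk namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
# The *_br peers are enslaved to a single bridge, which is what lets the two
# sides reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port and allow forwarding across the bridge; the SPDK_NVMF
# comment tag is what the teardown greps for later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'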
00:24:11.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:24:11.563 00:24:11.563 --- 10.0.0.1 ping statistics --- 00:24:11.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.563 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:11.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:24:11.563 00:24:11.563 --- 10.0.0.2 ping statistics --- 00:24:11.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.563 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:11.563 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=94597 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 94597 00:24:11.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 94597 ']' 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:11.564 14:28:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 [2024-11-06 14:28:39.228095] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
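The NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") line above is how the target binary, and every later target-side command, ends up running inside the namespace. A reduced sketch of what nvmfappstart then does with it (pid and option handling simplified):

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
"${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xF &   # 0xF core mask: four reactors, as logged below
nvmfpid=$!
waitforlisten "$nvmfpid"   # blocks until the app answers on /var/tmp/spdk.sock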
00:24:11.822 [2024-11-06 14:28:39.228187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.822 [2024-11-06 14:28:39.381336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.081 [2024-11-06 14:28:39.484008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.081 [2024-11-06 14:28:39.484051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.081 [2024-11-06 14:28:39.484063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.081 [2024-11-06 14:28:39.484071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.081 [2024-11-06 14:28:39.484080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.081 [2024-11-06 14:28:39.485472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.081 [2024-11-06 14:28:39.486141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.081 [2024-11-06 14:28:39.486218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.081 [2024-11-06 14:28:39.486193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.647 [2024-11-06 14:28:40.122652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.647 Malloc0 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.647 [2024-11-06 14:28:40.221567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.647 [ 00:24:12.647 { 00:24:12.647 "allow_any_host": true, 00:24:12.647 "hosts": [], 00:24:12.647 "listen_addresses": [], 00:24:12.647 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:12.647 "subtype": "Discovery" 00:24:12.647 }, 00:24:12.647 { 00:24:12.647 "allow_any_host": true, 00:24:12.647 "hosts": [], 00:24:12.647 "listen_addresses": [ 00:24:12.647 { 00:24:12.647 "adrfam": "IPv4", 00:24:12.647 "traddr": "10.0.0.3", 00:24:12.647 "trsvcid": "4420", 00:24:12.647 "trtype": "TCP" 00:24:12.647 } 00:24:12.647 ], 00:24:12.647 "max_cntlid": 65519, 00:24:12.647 "max_namespaces": 2, 00:24:12.647 "min_cntlid": 1, 00:24:12.647 "model_number": "SPDK bdev Controller", 00:24:12.647 "namespaces": [ 00:24:12.647 { 00:24:12.647 "bdev_name": "Malloc0", 00:24:12.647 "name": "Malloc0", 00:24:12.647 "nguid": "50A86174640840B48EE15255A8366115", 00:24:12.647 "nsid": 1, 00:24:12.647 "uuid": "50a86174-6408-40b4-8ee1-5255a8366115" 00:24:12.647 } 00:24:12.647 ], 00:24:12.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.647 "serial_number": "SPDK00000000000001", 00:24:12.647 "subtype": "NVMe" 00:24:12.647 } 00:24:12.647 ] 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=94651 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:24:12.647 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:24:12.906 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:13.164 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.165 Malloc1 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.165 [ 00:24:13.165 { 00:24:13.165 "allow_any_host": true, 00:24:13.165 "hosts": [], 00:24:13.165 "listen_addresses": [], 00:24:13.165 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:13.165 "subtype": "Discovery" 00:24:13.165 }, 00:24:13.165 { 00:24:13.165 "allow_any_host": true, 00:24:13.165 "hosts": [], 00:24:13.165 "listen_addresses": [ 00:24:13.165 { 00:24:13.165 "adrfam": "IPv4", 00:24:13.165 "traddr": "10.0.0.3", 00:24:13.165 "trsvcid": "4420", 00:24:13.165 "trtype": "TCP" 00:24:13.165 } 00:24:13.165 ], 00:24:13.165 "max_cntlid": 65519, 00:24:13.165 "max_namespaces": 2, 00:24:13.165 
"min_cntlid": 1, 00:24:13.165 "model_number": "SPDK bdev Controller", 00:24:13.165 "namespaces": [ 00:24:13.165 { 00:24:13.165 "bdev_name": "Malloc0", 00:24:13.165 "name": "Malloc0", 00:24:13.165 "nguid": "50A86174640840B48EE15255A8366115", 00:24:13.165 "nsid": 1, 00:24:13.165 "uuid": "50a86174-6408-40b4-8ee1-5255a8366115" 00:24:13.165 }, 00:24:13.165 { 00:24:13.165 "bdev_name": "Malloc1", 00:24:13.165 "name": "Malloc1", 00:24:13.165 "nguid": "94B96FAA4E3441F09DDF1CB61D0248F8", 00:24:13.165 "nsid": 2, 00:24:13.165 "uuid": "94b96faa-4e34-41f0-9ddf-1cb61d0248f8" 00:24:13.165 } 00:24:13.165 ], 00:24:13.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.165 "serial_number": "SPDK00000000000001", 00:24:13.165 "subtype": "NVMe" 00:24:13.165 } 00:24:13.165 ] 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 94651 00:24:13.165 Asynchronous Event Request test 00:24:13.165 Attaching to 10.0.0.3 00:24:13.165 Attached to 10.0.0.3 00:24:13.165 Registering asynchronous event callbacks... 00:24:13.165 Starting namespace attribute notice tests for all controllers... 00:24:13.165 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:13.165 aer_cb - Changed Namespace 00:24:13.165 Cleaning up... 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.165 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.423 14:28:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.423 rmmod nvme_tcp 00:24:13.423 rmmod nvme_fabrics 00:24:13.423 rmmod nvme_keyring 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.423 14:28:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 94597 ']' 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 94597 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 94597 ']' 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 94597 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:24:13.423 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.681 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94597 00:24:13.681 killing process with pid 94597 00:24:13.681 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.681 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.681 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94597' 00:24:13.681 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 94597 00:24:13.681 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 94597 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:14.247 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:14.506 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:14.506 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:14.506 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:14.506 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:14.506 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:14.506 14:28:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:14.506 14:28:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:24:14.506 00:24:14.506 real 0m3.363s 00:24:14.506 user 0m9.088s 00:24:14.506 sys 0m0.727s 00:24:14.506 ************************************ 00:24:14.506 END TEST nvmf_aer 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:14.506 ************************************ 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.506 ************************************ 00:24:14.506 START TEST nvmf_async_init 00:24:14.506 ************************************ 00:24:14.506 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.766 * Looking for test storage... 
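The iptr call in the nvmf_aer teardown above relies on the rule tagging shown during setup: every iptables rule the harness adds carries an SPDK_NVMF comment, so cleanup can strip exactly those rules without disturbing the rest of the host firewall. The helper reduces to:

iptr() {
  # Rewrite the ruleset minus every SPDK-tagged rule; unrelated rules survive.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}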
00:24:14.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.766 --rc genhtml_branch_coverage=1 00:24:14.766 --rc genhtml_function_coverage=1 00:24:14.766 --rc genhtml_legend=1 00:24:14.766 --rc geninfo_all_blocks=1 00:24:14.766 --rc geninfo_unexecuted_blocks=1 00:24:14.766 00:24:14.766 ' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.766 --rc genhtml_branch_coverage=1 00:24:14.766 --rc genhtml_function_coverage=1 00:24:14.766 --rc genhtml_legend=1 00:24:14.766 --rc geninfo_all_blocks=1 00:24:14.766 --rc geninfo_unexecuted_blocks=1 00:24:14.766 00:24:14.766 ' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.766 --rc genhtml_branch_coverage=1 00:24:14.766 --rc genhtml_function_coverage=1 00:24:14.766 --rc genhtml_legend=1 00:24:14.766 --rc geninfo_all_blocks=1 00:24:14.766 --rc geninfo_unexecuted_blocks=1 00:24:14.766 00:24:14.766 ' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:14.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.766 --rc genhtml_branch_coverage=1 00:24:14.766 --rc genhtml_function_coverage=1 00:24:14.766 --rc genhtml_legend=1 00:24:14.766 --rc geninfo_all_blocks=1 00:24:14.766 --rc geninfo_unexecuted_blocks=1 00:24:14.766 00:24:14.766 ' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.766 14:28:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.766 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.766 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:14.767 14:28:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=892bd18aeb0642c599212bff92161d16 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
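The nguid generated above for the async_init test is simply a UUID with its dashes stripped; it is handed to nvmf_subsystem_add_ns below via -g, and bdev_get_bdevs later reports the same value in both the 32-hex nguid form and the dashed uuid form:

nguid=$(uuidgen | tr -d -)   # here: 892bd18aeb0642c599212bff92161d16
# later: nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"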
00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:14.767 Cannot find device "nvmf_init_br" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:14.767 Cannot find device "nvmf_init_br2" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:14.767 Cannot find device "nvmf_tgt_br" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:14.767 Cannot find device "nvmf_tgt_br2" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:14.767 Cannot find device "nvmf_init_br" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:14.767 Cannot find device "nvmf_init_br2" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:14.767 Cannot find device "nvmf_tgt_br" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:14.767 Cannot find device "nvmf_tgt_br2" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:14.767 Cannot find device "nvmf_br" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:14.767 Cannot find device "nvmf_init_if" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:14.767 Cannot find device "nvmf_init_if2" 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:14.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:24:14.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:14.767 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:15.026 14:28:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:15.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:15.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:24:15.026 00:24:15.026 --- 10.0.0.3 ping statistics --- 00:24:15.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.026 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:15.026 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:15.026 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:24:15.026 00:24:15.026 --- 10.0.0.4 ping statistics --- 00:24:15.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.026 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:15.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:15.026 00:24:15.026 --- 10.0.0.1 ping statistics --- 00:24:15.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.026 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:15.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
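The four pings above and below are the harness's connectivity gate for the rebuilt topology: the host side pings both target addresses, then the namespace pings both initiator addresses, so each veth pair is exercised through nvmf_br in both directions. The pattern, as a sketch:

for tgt_ip in 10.0.0.3 10.0.0.4; do
  ping -c 1 "$tgt_ip"                                   # host -> namespace
done
for init_ip in 10.0.0.1 10.0.0.2; do
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$init_ip"   # namespace -> host
done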
00:24:15.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:24:15.026 00:24:15.026 --- 10.0.0.2 ping statistics --- 00:24:15.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.026 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:15.026 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=94891 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 94891 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 94891 ']' 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.027 14:28:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:15.284 [2024-11-06 14:28:42.667870] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:15.284 [2024-11-06 14:28:42.668002] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.284 [2024-11-06 14:28:42.828543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.543 [2024-11-06 14:28:42.926166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
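With the target up on a single core (-m 0x1, one reactor, as logged above), the async_init setup traced below is a handful of RPCs plus one initiator-side attach. Condensed here as direct scripts/rpc.py calls for readability; in the trace the same commands go through the harness's rpc_cmd wrapper over /var/tmp/spdk.sock:

rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512   # 1024 MiB of 512 B blocks: the
                                         # 2097152-block nvme0n1 reported below
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 892bd18aeb0642c599212bff92161d16
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# Initiator side: attach over TCP, which surfaces the namespace as nvme0n1.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0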
00:24:15.543 [2024-11-06 14:28:42.926211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.543 [2024-11-06 14:28:42.926223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.543 [2024-11-06 14:28:42.926238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.543 [2024-11-06 14:28:42.926247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.543 [2024-11-06 14:28:42.927136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 [2024-11-06 14:28:43.539574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 null0 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 892bd18aeb0642c599212bff92161d16 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.109 [2024-11-06 14:28:43.579688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.109 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.368 nvme0n1 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.368 [ 00:24:16.368 { 00:24:16.368 "aliases": [ 00:24:16.368 "892bd18a-eb06-42c5-9921-2bff92161d16" 00:24:16.368 ], 00:24:16.368 "assigned_rate_limits": { 00:24:16.368 "r_mbytes_per_sec": 0, 00:24:16.368 "rw_ios_per_sec": 0, 00:24:16.368 "rw_mbytes_per_sec": 0, 00:24:16.368 "w_mbytes_per_sec": 0 00:24:16.368 }, 00:24:16.368 "block_size": 512, 00:24:16.368 "claimed": false, 00:24:16.368 "driver_specific": { 00:24:16.368 "mp_policy": "active_passive", 00:24:16.368 "nvme": [ 00:24:16.368 { 00:24:16.368 "ctrlr_data": { 00:24:16.368 "ana_reporting": false, 00:24:16.368 "cntlid": 1, 00:24:16.368 "firmware_revision": "25.01", 00:24:16.368 "model_number": "SPDK bdev Controller", 00:24:16.368 "multi_ctrlr": true, 00:24:16.368 "oacs": { 00:24:16.368 "firmware": 0, 00:24:16.368 "format": 0, 00:24:16.368 "ns_manage": 0, 00:24:16.368 "security": 0 00:24:16.368 }, 00:24:16.368 "serial_number": "00000000000000000000", 00:24:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.368 "vendor_id": "0x8086" 00:24:16.368 }, 00:24:16.368 "ns_data": { 00:24:16.368 "can_share": true, 00:24:16.368 "id": 1 00:24:16.368 }, 00:24:16.368 "trid": { 00:24:16.368 "adrfam": "IPv4", 00:24:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.368 "traddr": "10.0.0.3", 00:24:16.368 "trsvcid": "4420", 00:24:16.368 "trtype": "TCP" 00:24:16.368 }, 00:24:16.368 "vs": { 00:24:16.368 "nvme_version": "1.3" 00:24:16.368 } 00:24:16.368 } 00:24:16.368 ] 00:24:16.368 }, 00:24:16.368 "memory_domains": [ 00:24:16.368 { 00:24:16.368 "dma_device_id": "system", 00:24:16.368 "dma_device_type": 1 00:24:16.368 } 00:24:16.368 ], 00:24:16.368 "name": "nvme0n1", 00:24:16.368 "num_blocks": 2097152, 00:24:16.368 "numa_id": -1, 00:24:16.368 "product_name": "NVMe disk", 00:24:16.368 "supported_io_types": { 00:24:16.368 "abort": true, 
00:24:16.368 "compare": true, 00:24:16.368 "compare_and_write": true, 00:24:16.368 "copy": true, 00:24:16.368 "flush": true, 00:24:16.368 "get_zone_info": false, 00:24:16.368 "nvme_admin": true, 00:24:16.368 "nvme_io": true, 00:24:16.368 "nvme_io_md": false, 00:24:16.368 "nvme_iov_md": false, 00:24:16.368 "read": true, 00:24:16.368 "reset": true, 00:24:16.368 "seek_data": false, 00:24:16.368 "seek_hole": false, 00:24:16.368 "unmap": false, 00:24:16.368 "write": true, 00:24:16.368 "write_zeroes": true, 00:24:16.368 "zcopy": false, 00:24:16.368 "zone_append": false, 00:24:16.368 "zone_management": false 00:24:16.368 }, 00:24:16.368 "uuid": "892bd18a-eb06-42c5-9921-2bff92161d16", 00:24:16.368 "zoned": false 00:24:16.368 } 00:24:16.368 ] 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.368 [2024-11-06 14:28:43.837940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:16.368 [2024-11-06 14:28:43.838046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:16.368 [2024-11-06 14:28:43.970093] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.368 [ 00:24:16.368 { 00:24:16.368 "aliases": [ 00:24:16.368 "892bd18a-eb06-42c5-9921-2bff92161d16" 00:24:16.368 ], 00:24:16.368 "assigned_rate_limits": { 00:24:16.368 "r_mbytes_per_sec": 0, 00:24:16.368 "rw_ios_per_sec": 0, 00:24:16.368 "rw_mbytes_per_sec": 0, 00:24:16.368 "w_mbytes_per_sec": 0 00:24:16.368 }, 00:24:16.368 "block_size": 512, 00:24:16.368 "claimed": false, 00:24:16.368 "driver_specific": { 00:24:16.368 "mp_policy": "active_passive", 00:24:16.368 "nvme": [ 00:24:16.368 { 00:24:16.368 "ctrlr_data": { 00:24:16.368 "ana_reporting": false, 00:24:16.368 "cntlid": 2, 00:24:16.368 "firmware_revision": "25.01", 00:24:16.368 "model_number": "SPDK bdev Controller", 00:24:16.368 "multi_ctrlr": true, 00:24:16.368 "oacs": { 00:24:16.368 "firmware": 0, 00:24:16.368 "format": 0, 00:24:16.368 "ns_manage": 0, 00:24:16.368 "security": 0 00:24:16.368 }, 00:24:16.368 "serial_number": "00000000000000000000", 00:24:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.368 "vendor_id": "0x8086" 00:24:16.368 }, 00:24:16.368 "ns_data": { 00:24:16.368 "can_share": true, 00:24:16.368 "id": 1 00:24:16.368 }, 00:24:16.368 "trid": { 00:24:16.368 "adrfam": "IPv4", 00:24:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.368 "traddr": "10.0.0.3", 00:24:16.368 "trsvcid": "4420", 00:24:16.368 "trtype": "TCP" 00:24:16.368 }, 00:24:16.368 "vs": { 00:24:16.368 "nvme_version": "1.3" 00:24:16.368 } 00:24:16.368 } 00:24:16.368 ] 
00:24:16.368 }, 00:24:16.368 "memory_domains": [ 00:24:16.368 { 00:24:16.368 "dma_device_id": "system", 00:24:16.368 "dma_device_type": 1 00:24:16.368 } 00:24:16.368 ], 00:24:16.368 "name": "nvme0n1", 00:24:16.368 "num_blocks": 2097152, 00:24:16.368 "numa_id": -1, 00:24:16.368 "product_name": "NVMe disk", 00:24:16.368 "supported_io_types": { 00:24:16.368 "abort": true, 00:24:16.368 "compare": true, 00:24:16.368 "compare_and_write": true, 00:24:16.368 "copy": true, 00:24:16.368 "flush": true, 00:24:16.368 "get_zone_info": false, 00:24:16.368 "nvme_admin": true, 00:24:16.368 "nvme_io": true, 00:24:16.368 "nvme_io_md": false, 00:24:16.368 "nvme_iov_md": false, 00:24:16.368 "read": true, 00:24:16.368 "reset": true, 00:24:16.368 "seek_data": false, 00:24:16.368 "seek_hole": false, 00:24:16.368 "unmap": false, 00:24:16.368 "write": true, 00:24:16.368 "write_zeroes": true, 00:24:16.368 "zcopy": false, 00:24:16.368 "zone_append": false, 00:24:16.368 "zone_management": false 00:24:16.368 }, 00:24:16.368 "uuid": "892bd18a-eb06-42c5-9921-2bff92161d16", 00:24:16.368 "zoned": false 00:24:16.368 } 00:24:16.368 ] 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.368 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.369 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.369 14:28:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hdYCuyWzA2 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hdYCuyWzA2 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.hdYCuyWzA2 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.627 [2024-11-06 14:28:44.042137] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.627 [2024-11-06 14:28:44.042288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.627 [2024-11-06 14:28:44.058131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.627 nvme0n1 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.627 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.627 [ 00:24:16.627 { 00:24:16.627 "aliases": [ 00:24:16.627 "892bd18a-eb06-42c5-9921-2bff92161d16" 00:24:16.627 ], 00:24:16.627 "assigned_rate_limits": { 00:24:16.627 "r_mbytes_per_sec": 0, 00:24:16.627 "rw_ios_per_sec": 0, 00:24:16.627 "rw_mbytes_per_sec": 0, 00:24:16.627 "w_mbytes_per_sec": 0 00:24:16.627 }, 00:24:16.627 "block_size": 512, 00:24:16.627 "claimed": false, 00:24:16.627 "driver_specific": { 00:24:16.627 "mp_policy": "active_passive", 00:24:16.627 "nvme": [ 00:24:16.627 { 00:24:16.627 "ctrlr_data": { 00:24:16.627 "ana_reporting": false, 00:24:16.627 "cntlid": 3, 00:24:16.627 "firmware_revision": "25.01", 00:24:16.627 "model_number": "SPDK bdev Controller", 00:24:16.627 "multi_ctrlr": true, 00:24:16.627 "oacs": { 00:24:16.627 "firmware": 0, 00:24:16.627 "format": 0, 00:24:16.627 "ns_manage": 0, 00:24:16.627 "security": 0 00:24:16.627 }, 00:24:16.627 "serial_number": "00000000000000000000", 00:24:16.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.627 "vendor_id": "0x8086" 00:24:16.627 }, 00:24:16.627 "ns_data": { 00:24:16.627 "can_share": true, 00:24:16.627 "id": 1 00:24:16.627 }, 00:24:16.627 "trid": { 00:24:16.627 "adrfam": "IPv4", 00:24:16.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.627 "traddr": "10.0.0.3", 00:24:16.627 "trsvcid": "4421", 00:24:16.627 "trtype": "TCP" 00:24:16.627 }, 00:24:16.627 "vs": { 00:24:16.627 "nvme_version": "1.3" 00:24:16.627 } 00:24:16.627 } 00:24:16.627 ] 00:24:16.627 }, 00:24:16.627 "memory_domains": [ 00:24:16.627 { 00:24:16.627 "dma_device_id": "system", 00:24:16.627 "dma_device_type": 1 00:24:16.627 } 00:24:16.627 ], 00:24:16.627 "name": "nvme0n1", 00:24:16.627 "num_blocks": 
2097152, 00:24:16.627 "numa_id": -1, 00:24:16.627 "product_name": "NVMe disk", 00:24:16.627 "supported_io_types": { 00:24:16.627 "abort": true, 00:24:16.627 "compare": true, 00:24:16.627 "compare_and_write": true, 00:24:16.627 "copy": true, 00:24:16.627 "flush": true, 00:24:16.627 "get_zone_info": false, 00:24:16.627 "nvme_admin": true, 00:24:16.627 "nvme_io": true, 00:24:16.627 "nvme_io_md": false, 00:24:16.627 "nvme_iov_md": false, 00:24:16.627 "read": true, 00:24:16.627 "reset": true, 00:24:16.627 "seek_data": false, 00:24:16.627 "seek_hole": false, 00:24:16.627 "unmap": false, 00:24:16.627 "write": true, 00:24:16.627 "write_zeroes": true, 00:24:16.627 "zcopy": false, 00:24:16.627 "zone_append": false, 00:24:16.627 "zone_management": false 00:24:16.627 }, 00:24:16.628 "uuid": "892bd18a-eb06-42c5-9921-2bff92161d16", 00:24:16.628 "zoned": false 00:24:16.628 } 00:24:16.628 ] 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.hdYCuyWzA2 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.628 rmmod nvme_tcp 00:24:16.628 rmmod nvme_fabrics 00:24:16.628 rmmod nvme_keyring 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 94891 ']' 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 94891 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 94891 ']' 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 94891 00:24:16.628 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:24:16.885 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:16.885 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94891 00:24:16.885 14:28:44 
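
# --- Editor's note: the last leg of async_init exercises TLS. The PSK is an
# interchange-format key (NVMeTLSkey-1:...), written to a 0600 file, registered
# with the keyring, then referenced by name on both the subsystem and the
# initiator side. Sketch; the key material and temp path are verbatim from the
# trace, everything else continues the assumed session above:
key_path=/tmp/tmp.hdYCuyWzA2
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 \
    -s 4421 --secure-channel                             # TLS-only listener (experimental per the notice)
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
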
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:16.885 killing process with pid 94891 00:24:16.885 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:16.885 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94891' 00:24:16.885 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 94891 00:24:16.885 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 94891 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:17.452 14:28:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:17.452 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
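
# --- Editor's note: nvmftestfini unwinds the fixture in reverse order: kill the
# target, strip the SPDK-tagged iptables rules, then dismantle the veth/bridge
# topology. Condensed sketch; the final namespace deletion is an assumption,
# since the trace hides _remove_spdk_ns behind xtrace_disable:
kill "$nvmfpid"                                  # pid 94891 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster && ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns delete nvmf_tgt_ns_spdk                 # assumed equivalent of _remove_spdk_ns
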
_remove_spdk_ns 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:24:17.712 00:24:17.712 real 0m3.132s 00:24:17.712 user 0m2.685s 00:24:17.712 sys 0m0.631s 00:24:17.712 ************************************ 00:24:17.712 END TEST nvmf_async_init 00:24:17.712 ************************************ 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.712 ************************************ 00:24:17.712 START TEST dma 00:24:17.712 ************************************ 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:17.712 * Looking for test storage... 00:24:17.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:24:17.712 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:17.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.971 --rc genhtml_branch_coverage=1 00:24:17.971 --rc genhtml_function_coverage=1 00:24:17.971 --rc genhtml_legend=1 00:24:17.971 --rc geninfo_all_blocks=1 00:24:17.971 --rc geninfo_unexecuted_blocks=1 00:24:17.971 00:24:17.971 ' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:17.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.971 --rc genhtml_branch_coverage=1 00:24:17.971 --rc genhtml_function_coverage=1 00:24:17.971 --rc genhtml_legend=1 00:24:17.971 --rc geninfo_all_blocks=1 00:24:17.971 --rc geninfo_unexecuted_blocks=1 00:24:17.971 00:24:17.971 ' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:17.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.971 --rc genhtml_branch_coverage=1 00:24:17.971 --rc genhtml_function_coverage=1 00:24:17.971 --rc genhtml_legend=1 00:24:17.971 --rc geninfo_all_blocks=1 00:24:17.971 --rc geninfo_unexecuted_blocks=1 00:24:17.971 00:24:17.971 ' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:17.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.971 --rc genhtml_branch_coverage=1 00:24:17.971 --rc genhtml_function_coverage=1 00:24:17.971 --rc genhtml_legend=1 00:24:17.971 --rc geninfo_all_blocks=1 00:24:17.971 --rc geninfo_unexecuted_blocks=1 00:24:17.971 00:24:17.971 ' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.971 14:28:45 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:17.971 ************************************ 00:24:17.971 END TEST dma 00:24:17.971 ************************************ 00:24:17.971 00:24:17.971 real 0m0.156s 00:24:17.971 user 0m0.096s 00:24:17.971 sys 0m0.064s 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:17.971 14:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.972 ************************************ 00:24:17.972 START TEST nvmf_identify 00:24:17.972 ************************************ 00:24:17.972 14:28:45 
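
# --- Editor's note: the dma suite is a deliberate no-op on this job: it only has
# work to do over RDMA, so on a TCP run it exits right after sourcing common.sh,
# which is why the whole "test" above is ~0.16s of sourcing noise plus the guard
# below. Sketch of that guard from host/dma.sh; the variable name is an
# assumption, inferred from the '[' tcp '!=' rdma ']' trace:
if [ "$TEST_TRANSPORT" != rdma ]; then
    exit 0      # nothing to do for TCP; the dma checks are RDMA-only
fi
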
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:17.972 * Looking for test storage... 00:24:17.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:17.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.972 --rc genhtml_branch_coverage=1 00:24:17.972 --rc genhtml_function_coverage=1 00:24:17.972 --rc genhtml_legend=1 00:24:17.972 --rc geninfo_all_blocks=1 00:24:17.972 --rc geninfo_unexecuted_blocks=1 00:24:17.972 00:24:17.972 ' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:17.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.972 --rc genhtml_branch_coverage=1 00:24:17.972 --rc genhtml_function_coverage=1 00:24:17.972 --rc genhtml_legend=1 00:24:17.972 --rc geninfo_all_blocks=1 00:24:17.972 --rc geninfo_unexecuted_blocks=1 00:24:17.972 00:24:17.972 ' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:17.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.972 --rc genhtml_branch_coverage=1 00:24:17.972 --rc genhtml_function_coverage=1 00:24:17.972 --rc genhtml_legend=1 00:24:17.972 --rc geninfo_all_blocks=1 00:24:17.972 --rc geninfo_unexecuted_blocks=1 00:24:17.972 00:24:17.972 ' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:17.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.972 --rc genhtml_branch_coverage=1 00:24:17.972 --rc genhtml_function_coverage=1 00:24:17.972 --rc genhtml_legend=1 00:24:17.972 --rc geninfo_all_blocks=1 00:24:17.972 --rc geninfo_unexecuted_blocks=1 00:24:17.972 00:24:17.972 ' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:17.972 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.242 
14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.242 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.242 14:28:45 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:18.242 Cannot find device "nvmf_init_br" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:18.242 Cannot find device "nvmf_init_br2" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:18.242 Cannot find device "nvmf_tgt_br" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:24:18.242 Cannot find device "nvmf_tgt_br2" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:18.242 Cannot find device "nvmf_init_br" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:18.242 Cannot find device "nvmf_init_br2" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:18.242 Cannot find device "nvmf_tgt_br" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:18.242 Cannot find device "nvmf_tgt_br2" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:18.242 Cannot find device "nvmf_br" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:18.242 Cannot find device "nvmf_init_if" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:18.242 Cannot find device "nvmf_init_if2" 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:18.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:18.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:18.242 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:18.243 
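
# --- Editor's note: nvmf_veth_init builds a self-contained test network: two
# initiator veth pairs in the root namespace and two target pairs whose far ends
# live in a private netns, all tied to one bridge (the bridge and the iptables
# ACCEPT rules follow just below). Condensed sketch of the first pair, commands
# as in the trace:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# ...the *2 pair gets 10.0.0.2 / 10.0.0.4 the same way, then everything is bridged:
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
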
14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:18.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:18.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:24:18.243 00:24:18.243 --- 10.0.0.3 ping statistics --- 00:24:18.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.243 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:18.243 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:18.501 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:18.501 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:24:18.501 00:24:18.501 --- 10.0.0.4 ping statistics --- 00:24:18.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.501 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:18.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:18.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:24:18.501 00:24:18.501 --- 10.0.0.1 ping statistics --- 00:24:18.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.501 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:18.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:24:18.501 00:24:18.501 --- 10.0.0.2 ping statistics --- 00:24:18.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.501 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=95220 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 95220 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 95220 ']' 00:24:18.501 
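The nvmf/common.sh portion of the trace (everything up through the four ping checks above) first tears down any interfaces left over from a previous run (the "Cannot find device" messages are the expected no-op case), then builds a fresh topology: an nvmf_tgt_ns_spdk network namespace, veth pairs whose *_if ends carry the IP addresses and whose *_br ends are enslaved to an nvmf_br bridge, iptables ACCEPT rules for the NVMe/TCP port 4420, and one ping per address to prove reachability. Condensed into plain commands, this is roughly the following sketch; every command is taken from the trace above, the nvmf_init_if2/nvmf_tgt_if2 pair (set up identically) is omitted for brevity, and the harness's ipts wrapper is expanded to the iptables invocation it logs:

# Namespace for the target; the initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if end gets an address, the *_br end joins the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace peer ends so 10.0.0.1 can reach 10.0.0.3 even
# though the target veth lives in its own namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port and allow bridged forwarding; the SPDK_NVMF comment
# lets the harness find and delete its own rules on teardown.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3                                  # root namespace -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator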
14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.501 14:28:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:18.501 [2024-11-06 14:28:45.981827] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:18.501 [2024-11-06 14:28:45.981942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.758 [2024-11-06 14:28:46.141998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.758 [2024-11-06 14:28:46.242051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.758 [2024-11-06 14:28:46.242098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.758 [2024-11-06 14:28:46.242110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.758 [2024-11-06 14:28:46.242118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.758 [2024-11-06 14:28:46.242127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
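The nvmf_tgt launch that produced the notices above reduces to one command inside the namespace. A sketch, with the binary path and flags exactly as logged; the wait loop is only a stand-in for what waitforlisten does more carefully:

# -m 0xF: reactors on cores 0-3 (the four "Reactor started" notices that follow);
# -e 0xFFFF: the tracepoint group mask described in the notices above;
# -i 0: shared-memory instance id, the same "-i 0" that spdk_trace refers to.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Configuration RPCs can only be issued once the UNIX socket exists.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done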
00:24:18.758 [2024-11-06 14:28:46.243666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.758 [2024-11-06 14:28:46.243896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.758 [2024-11-06 14:28:46.244176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.758 [2024-11-06 14:28:46.244373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.328 [2024-11-06 14:28:46.797306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.328 Malloc0 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.328 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.329 [2024-11-06 14:28:46.919337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.329 [ 00:24:19.329 { 00:24:19.329 "allow_any_host": true, 00:24:19.329 "hosts": [], 00:24:19.329 "listen_addresses": [ 00:24:19.329 { 00:24:19.329 "adrfam": "IPv4", 00:24:19.329 "traddr": "10.0.0.3", 00:24:19.329 "trsvcid": "4420", 00:24:19.329 "trtype": "TCP" 00:24:19.329 } 00:24:19.329 ], 00:24:19.329 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:19.329 "subtype": "Discovery" 00:24:19.329 }, 00:24:19.329 { 00:24:19.329 "allow_any_host": true, 00:24:19.329 "hosts": [], 00:24:19.329 "listen_addresses": [ 00:24:19.329 { 00:24:19.329 "adrfam": "IPv4", 00:24:19.329 "traddr": "10.0.0.3", 00:24:19.329 "trsvcid": "4420", 00:24:19.329 "trtype": "TCP" 00:24:19.329 } 00:24:19.329 ], 00:24:19.329 "max_cntlid": 65519, 00:24:19.329 "max_namespaces": 32, 00:24:19.329 "min_cntlid": 1, 00:24:19.329 "model_number": "SPDK bdev Controller", 00:24:19.329 "namespaces": [ 00:24:19.329 { 00:24:19.329 "bdev_name": "Malloc0", 00:24:19.329 "eui64": "ABCDEF0123456789", 00:24:19.329 "name": "Malloc0", 00:24:19.329 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:19.329 "nsid": 1, 00:24:19.329 "uuid": "dca0a120-174d-4000-b920-d8e1a5e9b58a" 00:24:19.329 } 00:24:19.329 ], 00:24:19.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.329 "serial_number": "SPDK00000000000001", 00:24:19.329 "subtype": "NVMe" 00:24:19.329 } 00:24:19.329 ] 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.329 14:28:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:19.591 [2024-11-06 14:28:46.982707] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
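With the target listening, the rpc_cmd sequence traced above has done all of the configuration; rpc_cmd is the harness's thin wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so the same setup can be expressed directly. A sketch; every flag below is taken verbatim from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192     # host/identify.sh@24
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems                         # returns the JSON dumped above

The -a on nvmf_create_subsystem allows any host NQN to connect, which is why the JSON above reports "allow_any_host": true for both the discovery subsystem and cnode1.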
00:24:19.591 [2024-11-06 14:28:46.982795] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95273 ] 00:24:19.591 [2024-11-06 14:28:47.150391] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:19.591 [2024-11-06 14:28:47.150492] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:19.591 [2024-11-06 14:28:47.150500] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:19.591 [2024-11-06 14:28:47.150523] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:19.591 [2024-11-06 14:28:47.150535] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:19.591 [2024-11-06 14:28:47.150831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:19.591 [2024-11-06 14:28:47.150891] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:24:19.591 [2024-11-06 14:28:47.155967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:19.591 [2024-11-06 14:28:47.155999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:19.591 [2024-11-06 14:28:47.156006] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:19.591 [2024-11-06 14:28:47.156014] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:19.591 [2024-11-06 14:28:47.156081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.156094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.156100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.591 [2024-11-06 14:28:47.156122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:19.591 [2024-11-06 14:28:47.156155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.591 [2024-11-06 14:28:47.163972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.591 [2024-11-06 14:28:47.164000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.591 [2024-11-06 14:28:47.164007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.591 [2024-11-06 14:28:47.164029] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:19.591 [2024-11-06 14:28:47.164044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:19.591 [2024-11-06 14:28:47.164052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:19.591 [2024-11-06 14:28:47.164068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164074] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.591 [2024-11-06 14:28:47.164099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.591 [2024-11-06 14:28:47.164126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.591 [2024-11-06 14:28:47.164211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.591 [2024-11-06 14:28:47.164222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.591 [2024-11-06 14:28:47.164228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.591 [2024-11-06 14:28:47.164243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:19.591 [2024-11-06 14:28:47.164252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:19.591 [2024-11-06 14:28:47.164261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.591 [2024-11-06 14:28:47.164284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.591 [2024-11-06 14:28:47.164301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.591 [2024-11-06 14:28:47.164368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.591 [2024-11-06 14:28:47.164376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.591 [2024-11-06 14:28:47.164380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.591 [2024-11-06 14:28:47.164392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:19.591 [2024-11-06 14:28:47.164404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:19.591 [2024-11-06 14:28:47.164413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.591 [2024-11-06 14:28:47.164435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.591 [2024-11-06 14:28:47.164451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.591 [2024-11-06 14:28:47.164498] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.591 [2024-11-06 14:28:47.164505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.591 [2024-11-06 14:28:47.164511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.591 [2024-11-06 14:28:47.164523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:19.591 [2024-11-06 14:28:47.164534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.591 [2024-11-06 14:28:47.164557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.591 [2024-11-06 14:28:47.164572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.591 [2024-11-06 14:28:47.164623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.591 [2024-11-06 14:28:47.164630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.591 [2024-11-06 14:28:47.164634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.591 [2024-11-06 14:28:47.164645] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:19.591 [2024-11-06 14:28:47.164652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:19.591 [2024-11-06 14:28:47.164661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:19.591 [2024-11-06 14:28:47.164768] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:19.591 [2024-11-06 14:28:47.164781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:19.591 [2024-11-06 14:28:47.164791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.591 [2024-11-06 14:28:47.164815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.591 [2024-11-06 14:28:47.164831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.591 [2024-11-06 14:28:47.164885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.591 [2024-11-06 14:28:47.164892] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.591 [2024-11-06 14:28:47.164896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.591 [2024-11-06 14:28:47.164909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:19.591 [2024-11-06 14:28:47.164920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.164931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.591 [2024-11-06 14:28:47.164944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.591 [2024-11-06 14:28:47.164975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.591 [2024-11-06 14:28:47.165038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.591 [2024-11-06 14:28:47.165045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.591 [2024-11-06 14:28:47.165049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.591 [2024-11-06 14:28:47.165054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.591 [2024-11-06 14:28:47.165060] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:19.591 [2024-11-06 14:28:47.165069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:19.591 [2024-11-06 14:28:47.165082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:19.591 [2024-11-06 14:28:47.165090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:19.591 [2024-11-06 14:28:47.165104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.592 [2024-11-06 14:28:47.165140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.592 [2024-11-06 14:28:47.165241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.592 [2024-11-06 14:28:47.165249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.592 [2024-11-06 14:28:47.165253] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165259] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:24:19.592 [2024-11-06 14:28:47.165265] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:24:19.592 [2024-11-06 14:28:47.165271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.592 [2024-11-06 14:28:47.165310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.592 [2024-11-06 14:28:47.165314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.592 [2024-11-06 14:28:47.165331] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:19.592 [2024-11-06 14:28:47.165338] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:19.592 [2024-11-06 14:28:47.165344] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:19.592 [2024-11-06 14:28:47.165351] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:19.592 [2024-11-06 14:28:47.165358] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:19.592 [2024-11-06 14:28:47.165364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:19.592 [2024-11-06 14:28:47.165374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:19.592 [2024-11-06 14:28:47.165382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:19.592 [2024-11-06 14:28:47.165423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.592 [2024-11-06 14:28:47.165479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.592 [2024-11-06 14:28:47.165486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.592 [2024-11-06 14:28:47.165492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.592 [2024-11-06 14:28:47.165507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165517] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.592 [2024-11-06 14:28:47.165536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.592 [2024-11-06 14:28:47.165559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.592 [2024-11-06 14:28:47.165584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.592 [2024-11-06 14:28:47.165606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:19.592 [2024-11-06 14:28:47.165623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:19.592 [2024-11-06 14:28:47.165633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.592 [2024-11-06 14:28:47.165669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:19.592 [2024-11-06 14:28:47.165675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:24:19.592 [2024-11-06 14:28:47.165682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:24:19.592 [2024-11-06 14:28:47.165688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:19.592 [2024-11-06 14:28:47.165694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:19.592 [2024-11-06 14:28:47.165786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.592 [2024-11-06 14:28:47.165793] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.592 [2024-11-06 14:28:47.165798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:19.592 [2024-11-06 14:28:47.165809] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:19.592 [2024-11-06 14:28:47.165816] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:19.592 [2024-11-06 14:28:47.165832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.165846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.592 [2024-11-06 14:28:47.165861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:19.592 [2024-11-06 14:28:47.165925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.592 [2024-11-06 14:28:47.165932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.592 [2024-11-06 14:28:47.165937] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165943] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:24:19.592 [2024-11-06 14:28:47.165960] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:24:19.592 [2024-11-06 14:28:47.165968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165982] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.165991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.592 [2024-11-06 14:28:47.165998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.592 [2024-11-06 14:28:47.166002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.166007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:19.592 [2024-11-06 14:28:47.166028] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:19.592 [2024-11-06 14:28:47.166067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.166076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.166085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.592 [2024-11-06 14:28:47.166094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.166099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:24:19.592 [2024-11-06 14:28:47.166104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:24:19.592 [2024-11-06 14:28:47.166114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.592 [2024-11-06 14:28:47.166133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:19.592 [2024-11-06 14:28:47.166140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:24:19.592 [2024-11-06 14:28:47.166353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.592 [2024-11-06 14:28:47.166369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.592 [2024-11-06 14:28:47.166373] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.166379] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:24:19.592 [2024-11-06 14:28:47.166385] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:24:19.592 [2024-11-06 14:28:47.166390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.166399] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.166404] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.592 [2024-11-06 14:28:47.166411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.592 [2024-11-06 14:28:47.166421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.592 [2024-11-06 14:28:47.166425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.166431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:24:19.593 [2024-11-06 14:28:47.207014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.593 [2024-11-06 14:28:47.207038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.593 [2024-11-06 14:28:47.207043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:19.593 [2024-11-06 14:28:47.207078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:19.593 [2024-11-06 14:28:47.207099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.593 [2024-11-06 14:28:47.207122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:19.593 [2024-11-06 14:28:47.207210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.593 [2024-11-06 14:28:47.207218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.593 [2024-11-06 14:28:47.207222] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207227] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:24:19.593 [2024-11-06 14:28:47.207232] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:24:19.593 [2024-11-06 14:28:47.207237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207245] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207250] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.593 [2024-11-06 14:28:47.207265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.593 [2024-11-06 14:28:47.207269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:19.593 [2024-11-06 14:28:47.207287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:19.593 [2024-11-06 14:28:47.207301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.593 [2024-11-06 14:28:47.207320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:19.593 [2024-11-06 14:28:47.207396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.593 [2024-11-06 14:28:47.207404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.593 [2024-11-06 14:28:47.207408] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207413] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:24:19.593 [2024-11-06 14:28:47.207418] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:24:19.593 [2024-11-06 14:28:47.207423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207431] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.593 [2024-11-06 14:28:47.207435] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.858 ===================================================== 00:24:19.858 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:19.858 ===================================================== 00:24:19.858 Controller Capabilities/Features 00:24:19.858 ================================ 00:24:19.858 Vendor ID: 0000 00:24:19.858 Subsystem Vendor ID: 0000 00:24:19.858 Serial Number: .................... 00:24:19.858 Model Number: ........................................ 
00:24:19.858 Firmware Version: 25.01 00:24:19.858 Recommended Arb Burst: 0 00:24:19.858 IEEE OUI Identifier: 00 00 00 00:24:19.858 Multi-path I/O 00:24:19.858 May have multiple subsystem ports: No 00:24:19.858 May have multiple controllers: No 00:24:19.858 Associated with SR-IOV VF: No 00:24:19.858 Max Data Transfer Size: 131072 00:24:19.858 Max Number of Namespaces: 0 00:24:19.858 Max Number of I/O Queues: 1024 00:24:19.858 NVMe Specification Version (VS): 1.3 00:24:19.858 NVMe Specification Version (Identify): 1.3 00:24:19.858 Maximum Queue Entries: 128 00:24:19.858 Contiguous Queues Required: Yes 00:24:19.858 Arbitration Mechanisms Supported 00:24:19.858 Weighted Round Robin: Not Supported 00:24:19.858 Vendor Specific: Not Supported 00:24:19.858 Reset Timeout: 15000 ms 00:24:19.858 Doorbell Stride: 4 bytes 00:24:19.858 NVM Subsystem Reset: Not Supported 00:24:19.858 Command Sets Supported 00:24:19.858 NVM Command Set: Supported 00:24:19.858 Boot Partition: Not Supported 00:24:19.858 Memory Page Size Minimum: 4096 bytes 00:24:19.858 Memory Page Size Maximum: 4096 bytes 00:24:19.858 Persistent Memory Region: Not Supported 00:24:19.858 Optional Asynchronous Events Supported 00:24:19.858 Namespace Attribute Notices: Not Supported 00:24:19.858 Firmware Activation Notices: Not Supported 00:24:19.858 ANA Change Notices: Not Supported 00:24:19.858 PLE Aggregate Log Change Notices: Not Supported 00:24:19.858 LBA Status Info Alert Notices: Not Supported 00:24:19.858 EGE Aggregate Log Change Notices: Not Supported 00:24:19.858 Normal NVM Subsystem Shutdown event: Not Supported 00:24:19.858 Zone Descriptor Change Notices: Not Supported 00:24:19.858 Discovery Log Change Notices: Supported 00:24:19.858 Controller Attributes 00:24:19.858 128-bit Host Identifier: Not Supported 00:24:19.858 Non-Operational Permissive Mode: Not Supported 00:24:19.858 NVM Sets: Not Supported 00:24:19.858 Read Recovery Levels: Not Supported 00:24:19.858 Endurance Groups: Not Supported 00:24:19.858 Predictable Latency Mode: Not Supported 00:24:19.858 Traffic Based Keep ALive: Not Supported 00:24:19.858 Namespace Granularity: Not Supported 00:24:19.858 SQ Associations: Not Supported 00:24:19.858 UUID List: Not Supported 00:24:19.858 Multi-Domain Subsystem: Not Supported 00:24:19.858 Fixed Capacity Management: Not Supported 00:24:19.858 Variable Capacity Management: Not Supported 00:24:19.858 Delete Endurance Group: Not Supported 00:24:19.858 Delete NVM Set: Not Supported 00:24:19.858 Extended LBA Formats Supported: Not Supported 00:24:19.858 Flexible Data Placement Supported: Not Supported 00:24:19.858 00:24:19.858 Controller Memory Buffer Support 00:24:19.858 ================================ 00:24:19.858 Supported: No 00:24:19.858 00:24:19.858 Persistent Memory Region Support 00:24:19.858 ================================ 00:24:19.858 Supported: No 00:24:19.858 00:24:19.858 Admin Command Set Attributes 00:24:19.858 ============================ 00:24:19.858 Security Send/Receive: Not Supported 00:24:19.858 Format NVM: Not Supported 00:24:19.858 Firmware Activate/Download: Not Supported 00:24:19.858 Namespace Management: Not Supported 00:24:19.858 Device Self-Test: Not Supported 00:24:19.858 Directives: Not Supported 00:24:19.858 NVMe-MI: Not Supported 00:24:19.858 Virtualization Management: Not Supported 00:24:19.858 Doorbell Buffer Config: Not Supported 00:24:19.858 Get LBA Status Capability: Not Supported 00:24:19.858 Command & Feature Lockdown Capability: Not Supported 00:24:19.858 Abort Command Limit: 1 00:24:19.858 Async 
Event Request Limit: 4 00:24:19.858 Number of Firmware Slots: N/A 00:24:19.858 Firmware Slot 1 Read-Only: N/A 00:24:19.858 [2024-11-06 14:28:47.251974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.858 [2024-11-06 14:28:47.251996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.858 [2024-11-06 14:28:47.252000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.858 [2024-11-06 14:28:47.252006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:19.858 Firmware Activation Without Reset: N/A 00:24:19.858 Multiple Update Detection Support: N/A 00:24:19.858 Firmware Update Granularity: No Information Provided 00:24:19.858 Per-Namespace SMART Log: No 00:24:19.858 Asymmetric Namespace Access Log Page: Not Supported 00:24:19.858 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:19.858 Command Effects Log Page: Not Supported 00:24:19.858 Get Log Page Extended Data: Supported 00:24:19.858 Telemetry Log Pages: Not Supported 00:24:19.858 Persistent Event Log Pages: Not Supported 00:24:19.858 Supported Log Pages Log Page: May Support 00:24:19.858 Commands Supported & Effects Log Page: Not Supported 00:24:19.859 Feature Identifiers & Effects Log Page: May Support 00:24:19.859 NVMe-MI Commands & Effects Log Page: May Support 00:24:19.859 Data Area 4 for Telemetry Log: Not Supported 00:24:19.859 Error Log Page Entries Supported: 128 00:24:19.859 Keep Alive: Not Supported 00:24:19.859 00:24:19.859 NVM Command Set Attributes 00:24:19.859 ========================== 00:24:19.859 Submission Queue Entry Size 00:24:19.859 Max: 1 00:24:19.859 Min: 1 00:24:19.859 Completion Queue Entry Size 00:24:19.859 Max: 1 00:24:19.859 Min: 1 00:24:19.859 Number of Namespaces: 0 00:24:19.859 Compare Command: Not Supported 00:24:19.859 Write Uncorrectable Command: Not Supported 00:24:19.859 Dataset Management Command: Not Supported 00:24:19.859 Write Zeroes Command: Not Supported 00:24:19.859 Set Features Save Field: Not Supported 00:24:19.859 Reservations: Not Supported 00:24:19.859 Timestamp: Not Supported 00:24:19.859 Copy: Not Supported 00:24:19.859 Volatile Write Cache: Not Present 00:24:19.859 Atomic Write Unit (Normal): 1 00:24:19.859 Atomic Write Unit (PFail): 1 00:24:19.859 Atomic Compare & Write Unit: 1 00:24:19.859 Fused Compare & Write: Supported 00:24:19.859 Scatter-Gather List 00:24:19.859 SGL Command Set: Supported 00:24:19.859 SGL Keyed: Supported 00:24:19.859 SGL Bit Bucket Descriptor: Not Supported 00:24:19.859 SGL Metadata Pointer: Not Supported 00:24:19.859 Oversized SGL: Not Supported 00:24:19.859 SGL Metadata Address: Not Supported 00:24:19.859 SGL Offset: Supported 00:24:19.859 Transport SGL Data Block: Not Supported 00:24:19.859 Replay Protected Memory Block: Not Supported 00:24:19.859 00:24:19.859 Firmware Slot Information 00:24:19.859 ========================= 00:24:19.859 Active slot: 0 00:24:19.859 00:24:19.859 00:24:19.859 Error Log 00:24:19.859 ========= 00:24:19.859 00:24:19.859 Active Namespaces 00:24:19.859 ================= 00:24:19.859 Discovery Log Page 00:24:19.859 ================== 00:24:19.859 Generation Counter: 2 00:24:19.859 Number of Records: 2 00:24:19.859 Record Format: 0 00:24:19.859 00:24:19.859 Discovery Log Entry 0 00:24:19.859 ---------------------- 00:24:19.859 Transport Type: 3 (TCP) 00:24:19.859 Address Family: 1 (IPv4) 00:24:19.859 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:19.859 Entry Flags: 00:24:19.859 Duplicate
Returned Information: 1 00:24:19.859 Explicit Persistent Connection Support for Discovery: 1 00:24:19.859 Transport Requirements: 00:24:19.859 Secure Channel: Not Required 00:24:19.859 Port ID: 0 (0x0000) 00:24:19.859 Controller ID: 65535 (0xffff) 00:24:19.859 Admin Max SQ Size: 128 00:24:19.859 Transport Service Identifier: 4420 00:24:19.859 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:19.859 Transport Address: 10.0.0.3 00:24:19.859 Discovery Log Entry 1 00:24:19.859 ---------------------- 00:24:19.859 Transport Type: 3 (TCP) 00:24:19.859 Address Family: 1 (IPv4) 00:24:19.859 Subsystem Type: 2 (NVM Subsystem) 00:24:19.859 Entry Flags: 00:24:19.859 Duplicate Returned Information: 0 00:24:19.859 Explicit Persistent Connection Support for Discovery: 0 00:24:19.859 Transport Requirements: 00:24:19.859 Secure Channel: Not Required 00:24:19.859 Port ID: 0 (0x0000) 00:24:19.859 Controller ID: 65535 (0xffff) 00:24:19.859 Admin Max SQ Size: 128 00:24:19.859 Transport Service Identifier: 4420 00:24:19.859 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:19.859 Transport Address: 10.0.0.3 [2024-11-06 14:28:47.252133] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:19.859 [2024-11-06 14:28:47.252146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:19.859 [2024-11-06 14:28:47.252156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.859 [2024-11-06 14:28:47.252163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:24:19.859 [2024-11-06 14:28:47.252170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.859 [2024-11-06 14:28:47.252176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:24:19.859 [2024-11-06 14:28:47.252182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.859 [2024-11-06 14:28:47.252188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:19.859 [2024-11-06 14:28:47.252194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.859 [2024-11-06 14:28:47.252209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.859 [2024-11-06 14:28:47.252218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.859 [2024-11-06 14:28:47.252223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:19.859 [2024-11-06 14:28:47.252233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.859 [2024-11-06 14:28:47.252256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:19.859 [2024-11-06 14:28:47.252329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.859 [2024-11-06 14:28:47.252338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.859 [2024-11-06 14:28:47.252342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:19.859 [2024-11-06 14:28:47.252348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:24:19.859 [2024-11-06 14:28:47.252357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:19.859 [2024-11-06 14:28:47.252363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:19.859 [2024-11-06 14:28:47.252371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:24:19.859 [2024-11-06 14:28:47.252380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.859 [2024-11-06 14:28:47.252401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:24:19.859 [2024-11-06 14:28:47.252484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:19.860 [2024-11-06 14:28:47.252496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:19.860 [2024-11-06 14:28:47.252500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:19.860 [2024-11-06 14:28:47.252505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:24:19.860 [2024-11-06 14:28:47.252512] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:24:19.860 [2024-11-06 14:28:47.252518] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
[... identical shutdown-poll iterations elided: each repeats build_contig_request / capsule_cmd cid=3 / FABRIC PROPERTY GET qid:0 cid:3 / pdu type = 5 / complete tcp_req(0x62600001b580) while nvme_ctrlr_shutdown_poll_async waits for the controller to finish shutting down ...]
00:24:19.864 [2024-11-06 14:28:47.260099] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:24:19.864
00:24:19.864 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:24:19.864 [2024-11-06 14:28:47.337566] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
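[Annotation] The trace that follows is the admin-queue connect and controller-initialization state machine that spdk_nvme_identify drives for the subsystem named in -r. A minimal sketch of the equivalent programmatic flow, assuming SPDK's public C API (spdk_nvme_transport_id_parse / spdk_nvme_connect); error handling is trimmed and exact spdk_env_opts handling varies by SPDK version:

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    /* Bring up the SPDK environment (hugepages, etc.). */
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string the test passes via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Connect runs the adminq state machine traced below: ICReq/ICResp,
     * FABRIC CONNECT, VS/CAP reads, CC.EN = 1, then IDENTIFY and AER setup. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}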
00:24:19.864 [2024-11-06 14:28:47.337658] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95276 ] 00:24:20.127 [2024-11-06 14:28:47.506447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:20.127 [2024-11-06 14:28:47.506550] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:20.127 [2024-11-06 14:28:47.506558] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:20.127 [2024-11-06 14:28:47.506580] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:20.127 [2024-11-06 14:28:47.506591] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:20.127 [2024-11-06 14:28:47.506881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:20.127 [2024-11-06 14:28:47.506940] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:24:20.127 [2024-11-06 14:28:47.520971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:20.127 [2024-11-06 14:28:47.520997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:20.127 [2024-11-06 14:28:47.521004] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:20.127 [2024-11-06 14:28:47.521010] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:20.127 [2024-11-06 14:28:47.521072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.521083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.521089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.127 [2024-11-06 14:28:47.521107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:20.127 [2024-11-06 14:28:47.521137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.127 [2024-11-06 14:28:47.528973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.127 [2024-11-06 14:28:47.528998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.127 [2024-11-06 14:28:47.529003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.127 [2024-11-06 14:28:47.529028] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:20.127 [2024-11-06 14:28:47.529042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:20.127 [2024-11-06 14:28:47.529050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:20.127 [2024-11-06 14:28:47.529064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.127 
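[Annotation] The recurring "pdu type = N" values in this trace are NVMe/TCP PDU opcodes. For reference, per the NVMe/TCP transport specification:

/* NVMe/TCP PDU types as they appear in the "pdu type = N" lines. */
enum nvme_tcp_pdu_type {
    PDU_ICREQ        = 0x00, /* initialize-connection request          */
    PDU_ICRESP       = 0x01, /* "pdu type = 1" answering the icreq     */
    PDU_H2C_TERM_REQ = 0x02,
    PDU_C2H_TERM_REQ = 0x03,
    PDU_CAPSULE_CMD  = 0x04, /* command capsule, host -> controller    */
    PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": response capsule       */
    PDU_H2C_DATA     = 0x06,
    PDU_C2H_DATA     = 0x07, /* "pdu type = 7": data such as IDENTIFY  */
    PDU_R2T          = 0x09,
};

So in the log, type 1 is the ICResp completing connection setup, type 5 carries every fabric/admin completion, and type 7 carries controller-to-host data payloads (the 4096-byte IDENTIFY buffers seen later).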
[2024-11-06 14:28:47.529079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.127 [2024-11-06 14:28:47.529091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.127 [2024-11-06 14:28:47.529116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.127 [2024-11-06 14:28:47.529192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.127 [2024-11-06 14:28:47.529200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.127 [2024-11-06 14:28:47.529205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.127 [2024-11-06 14:28:47.529219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:20.127 [2024-11-06 14:28:47.529230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:20.127 [2024-11-06 14:28:47.529239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.127 [2024-11-06 14:28:47.529265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.127 [2024-11-06 14:28:47.529281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.127 [2024-11-06 14:28:47.529421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.127 [2024-11-06 14:28:47.529436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.127 [2024-11-06 14:28:47.529440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.127 [2024-11-06 14:28:47.529455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:20.127 [2024-11-06 14:28:47.529465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:20.127 [2024-11-06 14:28:47.529473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.127 [2024-11-06 14:28:47.529496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.127 [2024-11-06 14:28:47.529512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.127 [2024-11-06 14:28:47.529822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.127 [2024-11-06 14:28:47.529837] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.127 [2024-11-06 14:28:47.529841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.127 [2024-11-06 14:28:47.529853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:20.127 [2024-11-06 14:28:47.529864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.529875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.127 [2024-11-06 14:28:47.529884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.127 [2024-11-06 14:28:47.529900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.127 [2024-11-06 14:28:47.530110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.127 [2024-11-06 14:28:47.530124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.127 [2024-11-06 14:28:47.530130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.530135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.127 [2024-11-06 14:28:47.530142] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:20.127 [2024-11-06 14:28:47.530148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:20.127 [2024-11-06 14:28:47.530159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:20.127 [2024-11-06 14:28:47.530266] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:20.127 [2024-11-06 14:28:47.530277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:20.127 [2024-11-06 14:28:47.530287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.530292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.530298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.127 [2024-11-06 14:28:47.530308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.127 [2024-11-06 14:28:47.530328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.127 [2024-11-06 14:28:47.530486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.127 [2024-11-06 14:28:47.530500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.127 [2024-11-06 14:28:47.530504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.127 
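[Annotation] The disable/enable dance traced here and just below (CC.EN = 0, wait CSTS.RDY = 0, then Setting CC.EN = 1, wait CSTS.RDY = 1) is the standard NVMe controller enable handshake; over fabrics each register access becomes one of the FABRIC PROPERTY GET/SET capsules printed above. A toy sketch with hypothetical prop_get/prop_set helpers (not SPDK API; a real controller would take time to report ready, bounded by the 15000 ms timeouts in the trace):

#include <stdint.h>

#define NVME_REG_CC   0x14          /* controller configuration (NVMe spec) */
#define NVME_REG_CSTS 0x1c          /* controller status        (NVMe spec) */
#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

static uint32_t regs[0x20 / 4];     /* toy register file for illustration   */

static uint32_t prop_get(uint32_t off) { return regs[off / 4]; }

static void prop_set(uint32_t off, uint32_t v)
{
    regs[off / 4] = v;
    if (off == NVME_REG_CC) {
        /* toy controller: CSTS.RDY immediately follows CC.EN */
        regs[NVME_REG_CSTS / 4] = (v & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
    }
}

static void enable_controller(void)
{
    /* 1. Disable and wait for CSTS.RDY = 0 (as in the trace above). */
    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) & ~NVME_CC_EN);
    while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) {
        /* poll */
    }

    /* 2. Set CC.EN = 1 and wait for CSTS.RDY = 1; once ready the init
     *    continues with IDENTIFY, AER configuration, and so on. */
    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
    while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
        /* poll */
    }
}

int main(void) { enable_controller(); return 0; }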
[2024-11-06 14:28:47.530509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.127 [2024-11-06 14:28:47.530515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:20.127 [2024-11-06 14:28:47.530533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.530539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.530544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.127 [2024-11-06 14:28:47.530554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.127 [2024-11-06 14:28:47.530569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.127 [2024-11-06 14:28:47.530623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.127 [2024-11-06 14:28:47.530632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.127 [2024-11-06 14:28:47.530637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.127 [2024-11-06 14:28:47.530641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.127 [2024-11-06 14:28:47.530647] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:20.127 [2024-11-06 14:28:47.530654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:20.127 [2024-11-06 14:28:47.530669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:20.127 [2024-11-06 14:28:47.530678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:20.127 [2024-11-06 14:28:47.530692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.530698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.530710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.128 [2024-11-06 14:28:47.530725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.128 [2024-11-06 14:28:47.531013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.128 [2024-11-06 14:28:47.531028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.128 [2024-11-06 14:28:47.531034] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531041] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:24:20.128 [2024-11-06 14:28:47.531047] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:24:20.128 [2024-11-06 14:28:47.531053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:24:20.128 [2024-11-06 14:28:47.531066] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531071] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.128 [2024-11-06 14:28:47.531088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.128 [2024-11-06 14:28:47.531092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.128 [2024-11-06 14:28:47.531110] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:20.128 [2024-11-06 14:28:47.531117] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:20.128 [2024-11-06 14:28:47.531125] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:20.128 [2024-11-06 14:28:47.531133] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:20.128 [2024-11-06 14:28:47.531140] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:20.128 [2024-11-06 14:28:47.531146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.531156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.531163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.531184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.128 [2024-11-06 14:28:47.531203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.128 [2024-11-06 14:28:47.531256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.128 [2024-11-06 14:28:47.531265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.128 [2024-11-06 14:28:47.531269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.128 [2024-11-06 14:28:47.531284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.531309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.128 [2024-11-06 14:28:47.531316] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.531337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.128 [2024-11-06 14:28:47.531343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.531359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.128 [2024-11-06 14:28:47.531366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.531382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.128 [2024-11-06 14:28:47.531387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.531407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.531415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.531430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.128 [2024-11-06 14:28:47.531452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.128 [2024-11-06 14:28:47.531459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:24:20.128 [2024-11-06 14:28:47.531464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:24:20.128 [2024-11-06 14:28:47.531470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.128 [2024-11-06 14:28:47.531475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:20.128 [2024-11-06 14:28:47.531756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.128 [2024-11-06 14:28:47.531770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.128 [2024-11-06 14:28:47.531775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:20.128 [2024-11-06 14:28:47.531789] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:20.128 [2024-11-06 14:28:47.531795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.531810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.531817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.531824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.531834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.531844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.128 [2024-11-06 14:28:47.531859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:20.128 [2024-11-06 14:28:47.531991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.128 [2024-11-06 14:28:47.532006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.128 [2024-11-06 14:28:47.532011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:20.128 [2024-11-06 14:28:47.532095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.532110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.532122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.532139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.128 [2024-11-06 14:28:47.532156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:20.128 [2024-11-06 14:28:47.532289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.128 [2024-11-06 14:28:47.532303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.128 [2024-11-06 14:28:47.532314] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532319] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:24:20.128 [2024-11-06 14:28:47.532325] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:24:20.128 [2024-11-06 14:28:47.532330] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532339] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532343] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.128 [2024-11-06 14:28:47.532398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.128 [2024-11-06 14:28:47.532402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:20.128 [2024-11-06 14:28:47.532432] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:20.128 [2024-11-06 14:28:47.532450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.532466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:20.128 [2024-11-06 14:28:47.532479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.128 [2024-11-06 14:28:47.532485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:20.128 [2024-11-06 14:28:47.532496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.532513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:20.129 [2024-11-06 14:28:47.532778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.129 [2024-11-06 14:28:47.532792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.129 [2024-11-06 14:28:47.532796] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.532801] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:24:20.129 [2024-11-06 14:28:47.532806] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:24:20.129 [2024-11-06 14:28:47.532811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.532819] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.532824] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.532832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.129 [2024-11-06 14:28:47.532841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.129 [2024-11-06 14:28:47.532845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.532850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:20.129 [2024-11-06 14:28:47.532876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.532890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.532903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.532908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.532918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.532935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:20.129 [2024-11-06 14:28:47.536968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.129 [2024-11-06 14:28:47.536991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.129 [2024-11-06 14:28:47.536996] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537001] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:24:20.129 [2024-11-06 14:28:47.537007] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:24:20.129 [2024-11-06 14:28:47.537012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537020] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537025] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.129 [2024-11-06 14:28:47.537046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.129 [2024-11-06 14:28:47.537050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:20.129 [2024-11-06 14:28:47.537083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.537093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.537107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.537116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.537122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.537129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.537135] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:20.129 [2024-11-06 14:28:47.537141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:20.129 [2024-11-06 14:28:47.537149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:20.129 [2024-11-06 14:28:47.537176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.537192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.537200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.537220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.129 [2024-11-06 14:28:47.537244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:20.129 [2024-11-06 14:28:47.537251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:24:20.129 [2024-11-06 14:28:47.537562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.129 [2024-11-06 14:28:47.537577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.129 [2024-11-06 14:28:47.537584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:20.129 [2024-11-06 14:28:47.537600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.129 [2024-11-06 14:28:47.537607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.129 [2024-11-06 14:28:47.537611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:24:20.129 [2024-11-06 14:28:47.537626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.537639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.537655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:24:20.129 [2024-11-06 14:28:47.537860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.129 [2024-11-06 14:28:47.537873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.129 [2024-11-06 14:28:47.537878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:24:20.129 [2024-11-06 14:28:47.537892] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.537897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.537906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.537920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:24:20.129 [2024-11-06 14:28:47.538054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.129 [2024-11-06 14:28:47.538068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.129 [2024-11-06 14:28:47.538072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.538077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:24:20.129 [2024-11-06 14:28:47.538087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.538092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.538102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.538117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:24:20.129 [2024-11-06 14:28:47.538231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.129 [2024-11-06 14:28:47.538244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.129 [2024-11-06 14:28:47.538248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.538252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:24:20.129 [2024-11-06 14:28:47.538275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.538281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.538291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.538300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.538305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.538314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.538323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.129 [2024-11-06 14:28:47.538328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.538337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.538352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
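
The trace above is the host side of NVMe-oF controller initialization over TCP: each "setting state to ..." entry is one step of the admin state machine in nvme_ctrlr.c (keep alive timeout, identify, number of queues, active namespaces, log pages, features, host ID), carried as capsule commands on admin qpair qid:0 until the controller reaches "ready". For reference, a minimal host-side sketch that drives this same sequence; it assumes SPDK's public C host API from this tree, reuses the target address and subsystem NQN printed in this log, and trims error reporting:

    /*
     * Sketch only (not part of this test run): spdk_nvme_connect() walks the
     * initialization states traced above and returns a ready controller.
     * The keep-alive value mirrors the 10000 ms granularity this target
     * reports; the library default would also work.
     */
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same target as in the trace: TCP to 10.0.0.3:4420, cnode1. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
        ctrlr_opts.keep_alive_timeout_ms = 10000;

        /* Runs the admin state machine logged above; NULL on failure. */
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
            return 1;
        }

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The GET FEATURES and GET LOG PAGE capsules that follow in the trace are the tail of this same sequence ("set supported log pages" / "set supported features"), after which the identify test prints the controller dump below.
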
00:24:20.129 [2024-11-06 14:28:47.538358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:24:20.129 [2024-11-06 14:28:47.538366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.129 [2024-11-06 14:28:47.538384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:24:20.129 [2024-11-06 14:28:47.538391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:24:20.129 [2024-11-06 14:28:47.538396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:24:20.129 [2024-11-06 14:28:47.538402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:24:20.129 [2024-11-06 14:28:47.538773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.129 [2024-11-06 14:28:47.538789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.130 [2024-11-06 14:28:47.538794] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538800] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:24:20.130 [2024-11-06 14:28:47.538805] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:24:20.130 [2024-11-06 14:28:47.538811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538832] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538837] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.130 [2024-11-06 14:28:47.538855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.130 [2024-11-06 14:28:47.538859] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538864] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:24:20.130 [2024-11-06 14:28:47.538868] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:24:20.130 [2024-11-06 14:28:47.538873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538881] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538885] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.130 [2024-11-06 14:28:47.538898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.130 [2024-11-06 14:28:47.538902] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538907] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:24:20.130 [2024-11-06 14:28:47.538911] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:24:20.130 
[2024-11-06 14:28:47.538920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538929] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538933] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.130 [2024-11-06 14:28:47.538946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.130 [2024-11-06 14:28:47.538962] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538966] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:24:20.130 [2024-11-06 14:28:47.538971] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:24:20.130 [2024-11-06 14:28:47.538976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538984] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538990] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.538997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.130 [2024-11-06 14:28:47.539003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.130 [2024-11-06 14:28:47.539007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.539013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:24:20.130 [2024-11-06 14:28:47.539036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.130 [2024-11-06 14:28:47.539042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.130 [2024-11-06 14:28:47.539046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.539051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:24:20.130 [2024-11-06 14:28:47.539062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.130 [2024-11-06 14:28:47.539070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.130 [2024-11-06 14:28:47.539074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.539079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:24:20.130 [2024-11-06 14:28:47.539088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.130 [2024-11-06 14:28:47.539094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.130 [2024-11-06 14:28:47.539098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.130 [2024-11-06 14:28:47.539103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:24:20.130 ===================================================== 00:24:20.130 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.130 ===================================================== 00:24:20.130 Controller Capabilities/Features 00:24:20.130 ================================ 00:24:20.130 Vendor ID: 8086 00:24:20.130 Subsystem Vendor ID: 8086 
00:24:20.130 Serial Number: SPDK00000000000001 00:24:20.130 Model Number: SPDK bdev Controller 00:24:20.130 Firmware Version: 25.01 00:24:20.130 Recommended Arb Burst: 6 00:24:20.130 IEEE OUI Identifier: e4 d2 5c 00:24:20.130 Multi-path I/O 00:24:20.130 May have multiple subsystem ports: Yes 00:24:20.130 May have multiple controllers: Yes 00:24:20.130 Associated with SR-IOV VF: No 00:24:20.130 Max Data Transfer Size: 131072 00:24:20.130 Max Number of Namespaces: 32 00:24:20.130 Max Number of I/O Queues: 127 00:24:20.130 NVMe Specification Version (VS): 1.3 00:24:20.130 NVMe Specification Version (Identify): 1.3 00:24:20.130 Maximum Queue Entries: 128 00:24:20.130 Contiguous Queues Required: Yes 00:24:20.130 Arbitration Mechanisms Supported 00:24:20.130 Weighted Round Robin: Not Supported 00:24:20.130 Vendor Specific: Not Supported 00:24:20.130 Reset Timeout: 15000 ms 00:24:20.130 Doorbell Stride: 4 bytes 00:24:20.130 NVM Subsystem Reset: Not Supported 00:24:20.130 Command Sets Supported 00:24:20.130 NVM Command Set: Supported 00:24:20.130 Boot Partition: Not Supported 00:24:20.130 Memory Page Size Minimum: 4096 bytes 00:24:20.130 Memory Page Size Maximum: 4096 bytes 00:24:20.130 Persistent Memory Region: Not Supported 00:24:20.130 Optional Asynchronous Events Supported 00:24:20.130 Namespace Attribute Notices: Supported 00:24:20.130 Firmware Activation Notices: Not Supported 00:24:20.130 ANA Change Notices: Not Supported 00:24:20.130 PLE Aggregate Log Change Notices: Not Supported 00:24:20.130 LBA Status Info Alert Notices: Not Supported 00:24:20.130 EGE Aggregate Log Change Notices: Not Supported 00:24:20.130 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.130 Zone Descriptor Change Notices: Not Supported 00:24:20.130 Discovery Log Change Notices: Not Supported 00:24:20.130 Controller Attributes 00:24:20.130 128-bit Host Identifier: Supported 00:24:20.130 Non-Operational Permissive Mode: Not Supported 00:24:20.130 NVM Sets: Not Supported 00:24:20.130 Read Recovery Levels: Not Supported 00:24:20.130 Endurance Groups: Not Supported 00:24:20.130 Predictable Latency Mode: Not Supported 00:24:20.130 Traffic Based Keep Alive: Not Supported 00:24:20.130 Namespace Granularity: Not Supported 00:24:20.130 SQ Associations: Not Supported 00:24:20.130 UUID List: Not Supported 00:24:20.130 Multi-Domain Subsystem: Not Supported 00:24:20.130 Fixed Capacity Management: Not Supported 00:24:20.130 Variable Capacity Management: Not Supported 00:24:20.130 Delete Endurance Group: Not Supported 00:24:20.130 Delete NVM Set: Not Supported 00:24:20.130 Extended LBA Formats Supported: Not Supported 00:24:20.130 Flexible Data Placement Supported: Not Supported 00:24:20.130 00:24:20.130 Controller Memory Buffer Support 00:24:20.130 ================================ 00:24:20.130 Supported: No 00:24:20.130 00:24:20.130 Persistent Memory Region Support 00:24:20.130 ================================ 00:24:20.130 Supported: No 00:24:20.130 00:24:20.130 Admin Command Set Attributes 00:24:20.130 ============================ 00:24:20.130 Security Send/Receive: Not Supported 00:24:20.130 Format NVM: Not Supported 00:24:20.130 Firmware Activate/Download: Not Supported 00:24:20.130 Namespace Management: Not Supported 00:24:20.130 Device Self-Test: Not Supported 00:24:20.130 Directives: Not Supported 00:24:20.130 NVMe-MI: Not Supported 00:24:20.130 Virtualization Management: Not Supported 00:24:20.130 Doorbell Buffer Config: Not Supported 00:24:20.130 Get LBA Status Capability: Not Supported 00:24:20.130 Command &
Feature Lockdown Capability: Not Supported 00:24:20.130 Abort Command Limit: 4 00:24:20.130 Async Event Request Limit: 4 00:24:20.130 Number of Firmware Slots: N/A 00:24:20.130 Firmware Slot 1 Read-Only: N/A 00:24:20.130 Firmware Activation Without Reset: N/A 00:24:20.130 Multiple Update Detection Support: N/A 00:24:20.130 Firmware Update Granularity: No Information Provided 00:24:20.130 Per-Namespace SMART Log: No 00:24:20.130 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.130 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:20.130 Command Effects Log Page: Supported 00:24:20.130 Get Log Page Extended Data: Supported 00:24:20.130 Telemetry Log Pages: Not Supported 00:24:20.130 Persistent Event Log Pages: Not Supported 00:24:20.130 Supported Log Pages Log Page: May Support 00:24:20.130 Commands Supported & Effects Log Page: Not Supported 00:24:20.130 Feature Identifiers & Effects Log Page:May Support 00:24:20.130 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.130 Data Area 4 for Telemetry Log: Not Supported 00:24:20.130 Error Log Page Entries Supported: 128 00:24:20.130 Keep Alive: Supported 00:24:20.130 Keep Alive Granularity: 10000 ms 00:24:20.130 00:24:20.130 NVM Command Set Attributes 00:24:20.130 ========================== 00:24:20.130 Submission Queue Entry Size 00:24:20.130 Max: 64 00:24:20.131 Min: 64 00:24:20.131 Completion Queue Entry Size 00:24:20.131 Max: 16 00:24:20.131 Min: 16 00:24:20.131 Number of Namespaces: 32 00:24:20.131 Compare Command: Supported 00:24:20.131 Write Uncorrectable Command: Not Supported 00:24:20.131 Dataset Management Command: Supported 00:24:20.131 Write Zeroes Command: Supported 00:24:20.131 Set Features Save Field: Not Supported 00:24:20.131 Reservations: Supported 00:24:20.131 Timestamp: Not Supported 00:24:20.131 Copy: Supported 00:24:20.131 Volatile Write Cache: Present 00:24:20.131 Atomic Write Unit (Normal): 1 00:24:20.131 Atomic Write Unit (PFail): 1 00:24:20.131 Atomic Compare & Write Unit: 1 00:24:20.131 Fused Compare & Write: Supported 00:24:20.131 Scatter-Gather List 00:24:20.131 SGL Command Set: Supported 00:24:20.131 SGL Keyed: Supported 00:24:20.131 SGL Bit Bucket Descriptor: Not Supported 00:24:20.131 SGL Metadata Pointer: Not Supported 00:24:20.131 Oversized SGL: Not Supported 00:24:20.131 SGL Metadata Address: Not Supported 00:24:20.131 SGL Offset: Supported 00:24:20.131 Transport SGL Data Block: Not Supported 00:24:20.131 Replay Protected Memory Block: Not Supported 00:24:20.131 00:24:20.131 Firmware Slot Information 00:24:20.131 ========================= 00:24:20.131 Active slot: 1 00:24:20.131 Slot 1 Firmware Revision: 25.01 00:24:20.131 00:24:20.131 00:24:20.131 Commands Supported and Effects 00:24:20.131 ============================== 00:24:20.131 Admin Commands 00:24:20.131 -------------- 00:24:20.131 Get Log Page (02h): Supported 00:24:20.131 Identify (06h): Supported 00:24:20.131 Abort (08h): Supported 00:24:20.131 Set Features (09h): Supported 00:24:20.131 Get Features (0Ah): Supported 00:24:20.131 Asynchronous Event Request (0Ch): Supported 00:24:20.131 Keep Alive (18h): Supported 00:24:20.131 I/O Commands 00:24:20.131 ------------ 00:24:20.131 Flush (00h): Supported LBA-Change 00:24:20.131 Write (01h): Supported LBA-Change 00:24:20.131 Read (02h): Supported 00:24:20.131 Compare (05h): Supported 00:24:20.131 Write Zeroes (08h): Supported LBA-Change 00:24:20.131 Dataset Management (09h): Supported LBA-Change 00:24:20.131 Copy (19h): Supported LBA-Change 00:24:20.131 00:24:20.131 Error Log 00:24:20.131 
========= 00:24:20.131 00:24:20.131 Arbitration 00:24:20.131 =========== 00:24:20.131 Arbitration Burst: 1 00:24:20.131 00:24:20.131 Power Management 00:24:20.131 ================ 00:24:20.131 Number of Power States: 1 00:24:20.131 Current Power State: Power State #0 00:24:20.131 Power State #0: 00:24:20.131 Max Power: 0.00 W 00:24:20.131 Non-Operational State: Operational 00:24:20.131 Entry Latency: Not Reported 00:24:20.131 Exit Latency: Not Reported 00:24:20.131 Relative Read Throughput: 0 00:24:20.131 Relative Read Latency: 0 00:24:20.131 Relative Write Throughput: 0 00:24:20.131 Relative Write Latency: 0 00:24:20.131 Idle Power: Not Reported 00:24:20.131 Active Power: Not Reported 00:24:20.131 Non-Operational Permissive Mode: Not Supported 00:24:20.131 00:24:20.131 Health Information 00:24:20.131 ================== 00:24:20.131 Critical Warnings: 00:24:20.131 Available Spare Space: OK 00:24:20.131 Temperature: OK 00:24:20.131 Device Reliability: OK 00:24:20.131 Read Only: No 00:24:20.131 Volatile Memory Backup: OK 00:24:20.131 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:20.131 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:20.131 Available Spare: 0% 00:24:20.131 Available Spare Threshold: 0% 00:24:20.131 Life Percentage Used:[2024-11-06 14:28:47.539247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.539254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:24:20.131 [2024-11-06 14:28:47.539264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.131 [2024-11-06 14:28:47.539285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:24:20.131 [2024-11-06 14:28:47.539402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.131 [2024-11-06 14:28:47.539418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.131 [2024-11-06 14:28:47.539423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.539429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:24:20.131 [2024-11-06 14:28:47.539489] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:20.131 [2024-11-06 14:28:47.539505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:24:20.131 [2024-11-06 14:28:47.539515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.131 [2024-11-06 14:28:47.539522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:24:20.131 [2024-11-06 14:28:47.539529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.131 [2024-11-06 14:28:47.539535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:24:20.131 [2024-11-06 14:28:47.539541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.131 [2024-11-06 14:28:47.539546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
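
The capability tables in the preceding lines (and their health and namespace tail further below, interleaved with the teardown trace) are printed by the identify example this test drives; every value comes from the Identify data the host fetched during initialization. A short sketch of reading the same fields programmatically, assuming SPDK's public host API and a hypothetical helper name; spdk_nvme_ctrlr_get_data() returns the cached Identify Controller structure, and sn/mn/fr are fixed-width, space-padded fields, hence the printf precision:

    /*
     * Sketch only: summarizing the dump above from a connected ctrlr.
     */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        /* Namespace 1, reported as "Namespace 1 was added" earlier in the trace. */
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);

        printf("Serial Number:    %.20s\n", cdata->sn);
        printf("Model Number:     %.40s\n", cdata->mn);
        printf("Firmware Version: %.8s\n", cdata->fr);
        printf("Max Namespaces:   %" PRIu32 "\n", cdata->nn);

        if (ns != NULL && spdk_nvme_ns_is_active(ns)) {
            printf("NS1: %" PRIu64 " LBAs of %" PRIu32 " bytes\n",
                   spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }

The "Prepare to destruct SSD" entries that follow are the host detaching once the identify test is done: the outstanding admin commands (including the ASYNC EVENT REQUESTs queued during init) complete as ABORTED - SQ DELETION, the shutdown notification is issued through the Fabrics property set/get capsules on cid:3, and the controller reports shutdown complete 5 milliseconds later.
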
00:24:20.131 [2024-11-06 14:28:47.539553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.131 [2024-11-06 14:28:47.539562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.539570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.539575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.131 [2024-11-06 14:28:47.539585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.131 [2024-11-06 14:28:47.539605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.131 [2024-11-06 14:28:47.539800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.131 [2024-11-06 14:28:47.539808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.131 [2024-11-06 14:28:47.539813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.539821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.131 [2024-11-06 14:28:47.539831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.539836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.539841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.131 [2024-11-06 14:28:47.539851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.131 [2024-11-06 14:28:47.539871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.131 [2024-11-06 14:28:47.540024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.131 [2024-11-06 14:28:47.540037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.131 [2024-11-06 14:28:47.540042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.540047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.131 [2024-11-06 14:28:47.540053] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:20.131 [2024-11-06 14:28:47.540060] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:20.131 [2024-11-06 14:28:47.540075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.540081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.131 [2024-11-06 14:28:47.540086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.131 [2024-11-06 14:28:47.540095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.131 [2024-11-06 14:28:47.540112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.132 [2024-11-06 14:28:47.540229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.132 [2024-11-06 
14:28:47.540244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.132 [2024-11-06 14:28:47.540248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.132 [2024-11-06 14:28:47.540266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.132 [2024-11-06 14:28:47.540285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.132 [2024-11-06 14:28:47.540299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.132 [2024-11-06 14:28:47.540364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.132 [2024-11-06 14:28:47.540375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.132 [2024-11-06 14:28:47.540379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.132 [2024-11-06 14:28:47.540394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.132 [2024-11-06 14:28:47.540412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.132 [2024-11-06 14:28:47.540427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.132 [2024-11-06 14:28:47.540634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.132 [2024-11-06 14:28:47.540649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.132 [2024-11-06 14:28:47.540654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.132 [2024-11-06 14:28:47.540669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.132 [2024-11-06 14:28:47.540688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.132 [2024-11-06 14:28:47.540703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.132 [2024-11-06 14:28:47.540846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.132 [2024-11-06 14:28:47.540858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.132 [2024-11-06 14:28:47.540862] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.132 [2024-11-06 14:28:47.540878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.540887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.132 [2024-11-06 14:28:47.540895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.132 [2024-11-06 14:28:47.540909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.132 [2024-11-06 14:28:47.544967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.132 [2024-11-06 14:28:47.544986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.132 [2024-11-06 14:28:47.544991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.544996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.132 [2024-11-06 14:28:47.545010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.545015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.545020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:24:20.132 [2024-11-06 14:28:47.545033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.132 [2024-11-06 14:28:47.545054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:24:20.132 [2024-11-06 14:28:47.545274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.132 [2024-11-06 14:28:47.545287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.132 [2024-11-06 14:28:47.545291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.132 [2024-11-06 14:28:47.545296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:24:20.132 [2024-11-06 14:28:47.545305] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:24:20.132 0% 00:24:20.132 Data Units Read: 0 00:24:20.132 Data Units Written: 0 00:24:20.132 Host Read Commands: 0 00:24:20.132 Host Write Commands: 0 00:24:20.132 Controller Busy Time: 0 minutes 00:24:20.132 Power Cycles: 0 00:24:20.132 Power On Hours: 0 hours 00:24:20.132 Unsafe Shutdowns: 0 00:24:20.132 Unrecoverable Media Errors: 0 00:24:20.132 Lifetime Error Log Entries: 0 00:24:20.132 Warning Temperature Time: 0 minutes 00:24:20.132 Critical Temperature Time: 0 minutes 00:24:20.132 00:24:20.132 Number of Queues 00:24:20.132 ================ 00:24:20.132 Number of I/O Submission Queues: 127 00:24:20.132 Number of I/O Completion Queues: 127 00:24:20.132 00:24:20.132 Active Namespaces 00:24:20.132 ================= 00:24:20.132 Namespace ID:1 00:24:20.132 Error Recovery Timeout: Unlimited 00:24:20.132 Command Set Identifier: NVM (00h) 00:24:20.132 Deallocate: Supported 00:24:20.132 
Deallocated/Unwritten Error: Not Supported 00:24:20.132 Deallocated Read Value: Unknown 00:24:20.132 Deallocate in Write Zeroes: Not Supported 00:24:20.132 Deallocated Guard Field: 0xFFFF 00:24:20.132 Flush: Supported 00:24:20.132 Reservation: Supported 00:24:20.132 Namespace Sharing Capabilities: Multiple Controllers 00:24:20.132 Size (in LBAs): 131072 (0GiB) 00:24:20.132 Capacity (in LBAs): 131072 (0GiB) 00:24:20.132 Utilization (in LBAs): 131072 (0GiB) 00:24:20.132 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:20.132 EUI64: ABCDEF0123456789 00:24:20.132 UUID: dca0a120-174d-4000-b920-d8e1a5e9b58a 00:24:20.132 Thin Provisioning: Not Supported 00:24:20.132 Per-NS Atomic Units: Yes 00:24:20.132 Atomic Boundary Size (Normal): 0 00:24:20.132 Atomic Boundary Size (PFail): 0 00:24:20.132 Atomic Boundary Offset: 0 00:24:20.132 Maximum Single Source Range Length: 65535 00:24:20.132 Maximum Copy Length: 65535 00:24:20.132 Maximum Source Range Count: 1 00:24:20.132 NGUID/EUI64 Never Reused: No 00:24:20.132 Namespace Write Protected: No 00:24:20.132 Number of LBA Formats: 1 00:24:20.132 Current LBA Format: LBA Format #00 00:24:20.132 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:20.132 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.132 rmmod nvme_tcp 00:24:20.132 rmmod nvme_fabrics 00:24:20.132 rmmod nvme_keyring 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 95220 ']' 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 95220 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 95220 ']' 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 95220 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:20.132 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.132 
14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95220 00:24:20.394 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:20.394 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:20.394 killing process with pid 95220 00:24:20.394 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95220' 00:24:20.394 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 95220 00:24:20.394 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 95220 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.334 14:28:48 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:24:21.334 00:24:21.334 real 0m3.395s 00:24:21.334 user 0m9.304s 00:24:21.334 sys 0m0.718s 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.334 ************************************ 00:24:21.334 END TEST nvmf_identify 00:24:21.334 ************************************ 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.334 ************************************ 00:24:21.334 START TEST nvmf_perf 00:24:21.334 ************************************ 00:24:21.334 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:21.334 * Looking for test storage... 00:24:21.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:21.599 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:21.599 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:21.599 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.599 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:21.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.600 --rc genhtml_branch_coverage=1 00:24:21.600 --rc genhtml_function_coverage=1 00:24:21.600 --rc genhtml_legend=1 00:24:21.600 --rc geninfo_all_blocks=1 00:24:21.600 --rc geninfo_unexecuted_blocks=1 00:24:21.600 00:24:21.600 ' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:21.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.600 --rc genhtml_branch_coverage=1 00:24:21.600 --rc genhtml_function_coverage=1 00:24:21.600 --rc genhtml_legend=1 00:24:21.600 --rc geninfo_all_blocks=1 00:24:21.600 --rc geninfo_unexecuted_blocks=1 00:24:21.600 00:24:21.600 ' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:21.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.600 --rc genhtml_branch_coverage=1 00:24:21.600 --rc genhtml_function_coverage=1 00:24:21.600 --rc genhtml_legend=1 00:24:21.600 --rc geninfo_all_blocks=1 00:24:21.600 --rc geninfo_unexecuted_blocks=1 00:24:21.600 00:24:21.600 ' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:21.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.600 --rc genhtml_branch_coverage=1 00:24:21.600 --rc genhtml_function_coverage=1 00:24:21.600 --rc genhtml_legend=1 00:24:21.600 --rc geninfo_all_blocks=1 00:24:21.600 --rc geninfo_unexecuted_blocks=1 00:24:21.600 00:24:21.600 ' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.600 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:21.601 Cannot find device "nvmf_init_br" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:21.601 Cannot find device "nvmf_init_br2" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:21.601 Cannot find device "nvmf_tgt_br" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.601 Cannot find device "nvmf_tgt_br2" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:21.601 Cannot find device "nvmf_init_br" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:21.601 Cannot find device "nvmf_init_br2" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:21.601 Cannot find device "nvmf_tgt_br" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:21.601 Cannot find device "nvmf_tgt_br2" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:21.601 Cannot find device "nvmf_br" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:21.601 Cannot find device "nvmf_init_if" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:21.601 Cannot find device "nvmf_init_if2" 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:21.601 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:21.871 14:28:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:21.871 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:21.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:21.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:24:21.872 00:24:21.872 --- 10.0.0.3 ping statistics --- 00:24:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.872 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:21.872 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:21.872 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:24:21.872 00:24:21.872 --- 10.0.0.4 ping statistics --- 00:24:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.872 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:21.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:24:21.872 00:24:21.872 --- 10.0.0.1 ping statistics --- 00:24:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.872 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:21.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:24:21.872 00:24:21.872 --- 10.0.0.2 ping statistics --- 00:24:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.872 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=95503 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 95503 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 95503 ']' 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
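[Editor's sketch] The nvmf/common.sh@177-@225 trace above builds the virtual test network from scratch. The following is a condensed, hedged recap of that topology, assuming root privileges and the same interface names the log uses; only the first initiator/target veth pair is shown (the trace also creates the *_if2 twins, a second iptables rule, and a bridge FORWARD rule):

# Target side lives in its own namespace; initiator side stays in the root ns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Addressing as in the log: 10.0.0.1 = initiator, 10.0.0.3 = target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# A bridge in the root namespace stitches the two veth halves together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port and verify reachability, as the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: common.sh tears down any leftover interfaces before recreating them, so on a clean host every cleanup command fails harmlessly.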
00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:21.872 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.872 [2024-11-06 14:28:49.504300] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:21.872 [2024-11-06 14:28:49.504417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.132 [2024-11-06 14:28:49.664063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.132 [2024-11-06 14:28:49.764090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.132 [2024-11-06 14:28:49.764133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.132 [2024-11-06 14:28:49.764144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.132 [2024-11-06 14:28:49.764152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.132 [2024-11-06 14:28:49.764160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.390 [2024-11-06 14:28:49.765730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.390 [2024-11-06 14:28:49.766025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.390 [2024-11-06 14:28:49.766193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.390 [2024-11-06 14:28:49.766343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:22.956 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:24:23.213 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:23.213 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:24:23.471 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:24:23.471 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:23.728 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:23.728 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:24:23.728 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
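[Editor's sketch] With local_nvme_trid resolved to 0000:00:10.0 and the bdev list assembled as ' Malloc0 Nvme0n1', host/perf.sh provisions the target over JSON-RPC. A minimal sketch of that sequence, using the same rpc.py subcommands and arguments the trace issues below, and assuming the nvmf_tgt started above is already listening on the default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE).
$rpc bdev_malloc_create 64 512
# TCP transport; -o comes from NVMF_TRANSPORT_OPTS='-t tcp -o'.
$rpc nvmf_create_transport -t tcp -o
# One subsystem carrying both namespaces, then a data listener on the target veth IP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# First NVMe/TCP perf run from the trace: QD 1, 4 KiB, 50/50 random read/write, 1 s.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

The trace additionally registers a discovery listener on the same address/port before the perf runs; it is omitted here for brevity.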
00:24:23.728 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:23.728 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:23.986 [2024-11-06 14:28:51.401061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.986 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.244 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:24.244 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.244 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:24.244 14:28:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:24.501 14:28:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:24.757 [2024-11-06 14:28:52.202136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:24.757 14:28:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:25.015 14:28:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:25.015 14:28:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:24:25.015 14:28:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:25.015 14:28:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:24:25.947 Initializing NVMe Controllers 00:24:25.947 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:25.947 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:24:25.947 Initialization complete. Launching workers. 00:24:25.947 ======================================================== 00:24:25.947 Latency(us) 00:24:25.947 Device Information : IOPS MiB/s Average min max 00:24:25.947 PCIE (0000:00:10.0) NSID 1 from core 0: 32094.30 125.37 996.81 243.26 6436.09 00:24:25.947 ======================================================== 00:24:25.947 Total : 32094.30 125.37 996.81 243.26 6436.09 00:24:25.947 00:24:26.205 14:28:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:27.580 Initializing NVMe Controllers 00:24:27.580 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:27.580 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:27.580 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:27.580 Initialization complete. Launching workers. 
00:24:27.580 ======================================================== 00:24:27.580 Latency(us) 00:24:27.580 Device Information : IOPS MiB/s Average min max 00:24:27.580 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5512.86 21.53 180.42 71.66 7077.67 00:24:27.580 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.20 4977.54 12047.53 00:24:27.580 ======================================================== 00:24:27.580 Total : 5636.85 22.02 355.25 71.66 12047.53 00:24:27.580 00:24:27.580 14:28:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:28.954 Initializing NVMe Controllers 00:24:28.954 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.954 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.954 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.954 Initialization complete. Launching workers. 00:24:28.954 ======================================================== 00:24:28.954 Latency(us) 00:24:28.954 Device Information : IOPS MiB/s Average min max 00:24:28.954 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10510.10 41.06 3044.84 416.98 6589.41 00:24:28.954 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2720.86 10.63 11888.82 7390.75 20220.14 00:24:28.954 ======================================================== 00:24:28.954 Total : 13230.96 51.68 4863.55 416.98 20220.14 00:24:28.954 00:24:28.954 14:28:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:24:28.955 14:28:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:31.492 Initializing NVMe Controllers 00:24:31.493 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.493 Controller IO queue size 128, less than required. 00:24:31.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.493 Controller IO queue size 128, less than required. 00:24:31.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.493 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.493 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:31.493 Initialization complete. Launching workers. 
00:24:31.493 ======================================================== 00:24:31.493 Latency(us) 00:24:31.493 Device Information : IOPS MiB/s Average min max 00:24:31.493 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2312.24 578.06 56586.27 33282.14 174882.49 00:24:31.493 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 544.44 136.11 249862.66 86117.62 463469.56 00:24:31.493 ======================================================== 00:24:31.493 Total : 2856.68 714.17 93421.76 33282.14 463469.56 00:24:31.493 00:24:31.751 14:28:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:24:32.009 Initializing NVMe Controllers 00:24:32.009 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:32.009 Controller IO queue size 128, less than required. 00:24:32.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.009 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:32.009 Controller IO queue size 128, less than required. 00:24:32.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.009 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:24:32.009 WARNING: Some requested NVMe devices were skipped 00:24:32.009 No valid NVMe controllers or AIO or URING devices found 00:24:32.009 14:28:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:24:34.536 Initializing NVMe Controllers 00:24:34.536 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.536 Controller IO queue size 128, less than required. 00:24:34.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.536 Controller IO queue size 128, less than required. 00:24:34.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.536 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:34.536 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:34.536 Initialization complete. Launching workers. 
00:24:34.536 00:24:34.536 ==================== 00:24:34.536 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:34.536 TCP transport: 00:24:34.536 polls: 9808 00:24:34.536 idle_polls: 4554 00:24:34.536 sock_completions: 5254 00:24:34.536 nvme_completions: 7703 00:24:34.536 submitted_requests: 11490 00:24:34.536 queued_requests: 1 00:24:34.536 00:24:34.536 ==================== 00:24:34.536 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:34.536 TCP transport: 00:24:34.536 polls: 10593 00:24:34.536 idle_polls: 6211 00:24:34.536 sock_completions: 4382 00:24:34.536 nvme_completions: 8293 00:24:34.536 submitted_requests: 12498 00:24:34.536 queued_requests: 1 00:24:34.536 ======================================================== 00:24:34.536 Latency(us) 00:24:34.536 Device Information : IOPS MiB/s Average min max 00:24:34.536 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1921.43 480.36 69578.71 43008.49 234856.03 00:24:34.537 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2068.62 517.16 61052.46 28553.70 163509.30 00:24:34.537 ======================================================== 00:24:34.537 Total : 3990.06 997.51 65158.33 28553.70 234856.03 00:24:34.537 00:24:34.795 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:34.795 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=30106423-c5c4-4259-9d91-9d30ecaaf50d 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 30106423-c5c4-4259-9d91-9d30ecaaf50d 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=30106423-c5c4-4259-9d91-9d30ecaaf50d 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:24:35.054 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:24:35.312 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:35.312 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:24:35.312 { 00:24:35.312 "base_bdev": "Nvme0n1", 00:24:35.312 "block_size": 4096, 00:24:35.312 "cluster_size": 4194304, 00:24:35.312 "free_clusters": 1278, 00:24:35.312 "name": "lvs_0", 00:24:35.312 "total_data_clusters": 1278, 00:24:35.312 "uuid": "30106423-c5c4-4259-9d91-9d30ecaaf50d" 00:24:35.312 } 00:24:35.312 ]' 00:24:35.312 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="30106423-c5c4-4259-9d91-9d30ecaaf50d") .free_clusters' 00:24:35.312 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=1278 00:24:35.312 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | 
select(.uuid=="30106423-c5c4-4259-9d91-9d30ecaaf50d") .cluster_size' 00:24:35.570 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:24:35.570 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=5112 00:24:35.570 5112 00:24:35.570 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 5112 00:24:35.570 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:24:35.570 14:29:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 30106423-c5c4-4259-9d91-9d30ecaaf50d lbd_0 5112 00:24:35.570 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=6a300fdc-7c37-4ce5-abf0-100ac94b3155 00:24:35.570 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 6a300fdc-7c37-4ce5-abf0-100ac94b3155 lvs_n_0 00:24:35.828 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=47ab562f-0240-4bfa-a6cc-29d52db78dd2 00:24:35.828 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 47ab562f-0240-4bfa-a6cc-29d52db78dd2 00:24:35.828 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=47ab562f-0240-4bfa-a6cc-29d52db78dd2 00:24:35.828 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:24:35.828 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:24:35.828 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:24:35.828 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:24:36.088 { 00:24:36.088 "base_bdev": "Nvme0n1", 00:24:36.088 "block_size": 4096, 00:24:36.088 "cluster_size": 4194304, 00:24:36.088 "free_clusters": 0, 00:24:36.088 "name": "lvs_0", 00:24:36.088 "total_data_clusters": 1278, 00:24:36.088 "uuid": "30106423-c5c4-4259-9d91-9d30ecaaf50d" 00:24:36.088 }, 00:24:36.088 { 00:24:36.088 "base_bdev": "6a300fdc-7c37-4ce5-abf0-100ac94b3155", 00:24:36.088 "block_size": 4096, 00:24:36.088 "cluster_size": 4194304, 00:24:36.088 "free_clusters": 1276, 00:24:36.088 "name": "lvs_n_0", 00:24:36.088 "total_data_clusters": 1276, 00:24:36.088 "uuid": "47ab562f-0240-4bfa-a6cc-29d52db78dd2" 00:24:36.088 } 00:24:36.088 ]' 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="47ab562f-0240-4bfa-a6cc-29d52db78dd2") .free_clusters' 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=1276 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="47ab562f-0240-4bfa-a6cc-29d52db78dd2") .cluster_size' 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=5104 00:24:36.088 5104 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 5104 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:24:36.088 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 47ab562f-0240-4bfa-a6cc-29d52db78dd2 lbd_nest_0 5104 00:24:36.346 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=abf404e6-9d42-49c0-8916-2c72ebfc7416 00:24:36.346 14:29:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.605 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:24:36.605 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 abf404e6-9d42-49c0-8916-2c72ebfc7416 00:24:36.863 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:37.122 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:24:37.122 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:24:37.122 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:37.122 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:37.122 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:37.380 Initializing NVMe Controllers 00:24:37.380 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.380 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:37.380 WARNING: Some requested NVMe devices were skipped 00:24:37.380 No valid NVMe controllers or AIO or URING devices found 00:24:37.380 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:37.380 14:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:49.586 Initializing NVMe Controllers 00:24:49.586 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.586 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:49.586 Initialization complete. Launching workers. 
00:24:49.587 ======================================================== 00:24:49.587 Latency(us) 00:24:49.587 Device Information : IOPS MiB/s Average min max 00:24:49.587 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.64 150.83 828.40 232.04 12091.17 00:24:49.587 ======================================================== 00:24:49.587 Total : 1206.64 150.83 828.40 232.04 12091.17 00:24:49.587 00:24:49.587 14:29:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:49.587 14:29:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:49.587 14:29:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:49.587 Initializing NVMe Controllers 00:24:49.587 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.587 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:49.587 WARNING: Some requested NVMe devices were skipped 00:24:49.587 No valid NVMe controllers or AIO or URING devices found 00:24:49.587 14:29:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:49.587 14:29:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:59.557 Initializing NVMe Controllers 00:24:59.557 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:59.557 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:59.557 Initialization complete. Launching workers. 
00:24:59.557 ======================================================== 00:24:59.557 Latency(us) 00:24:59.557 Device Information : IOPS MiB/s Average min max 00:24:59.557 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1129.04 141.13 28359.82 7914.26 272187.75 00:24:59.557 ======================================================== 00:24:59.557 Total : 1129.04 141.13 28359.82 7914.26 272187.75 00:24:59.557 00:24:59.557 14:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:59.557 14:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:59.557 14:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:59.557 Initializing NVMe Controllers 00:24:59.557 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:59.557 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:59.557 WARNING: Some requested NVMe devices were skipped 00:24:59.557 No valid NVMe controllers or AIO or URING devices found 00:24:59.557 14:29:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:59.557 14:29:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:09.524 Initializing NVMe Controllers 00:25:09.524 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.524 Controller IO queue size 128, less than required. 00:25:09.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.524 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:09.524 Initialization complete. Launching workers. 
00:25:09.524 ======================================================== 00:25:09.524 Latency(us) 00:25:09.524 Device Information : IOPS MiB/s Average min max 00:25:09.524 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4690.83 586.35 27303.14 7090.74 69401.69 00:25:09.524 ======================================================== 00:25:09.524 Total : 4690.83 586.35 27303.14 7090.74 69401.69 00:25:09.524 00:25:09.524 14:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.524 14:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete abf404e6-9d42-49c0-8916-2c72ebfc7416 00:25:09.782 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:10.041 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6a300fdc-7c37-4ce5-abf0-100ac94b3155 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.300 rmmod nvme_tcp 00:25:10.300 rmmod nvme_fabrics 00:25:10.300 rmmod nvme_keyring 00:25:10.300 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 95503 ']' 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 95503 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 95503 ']' 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 95503 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95503 00:25:10.558 killing process with pid 95503 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95503' 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@971 -- # kill 95503 00:25:10.558 14:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 95503 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:14.816 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:25:15.077 00:25:15.077 real 0m53.708s 00:25:15.077 user 3m20.881s 00:25:15.077 sys 0m8.878s 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:15.077 ************************************ 00:25:15.077 END TEST nvmf_perf 00:25:15.077 ************************************ 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.077 ************************************ 00:25:15.077 START TEST nvmf_fio_host 00:25:15.077 ************************************ 00:25:15.077 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:15.336 * Looking for test storage... 00:25:15.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.336 --rc genhtml_branch_coverage=1 00:25:15.336 --rc genhtml_function_coverage=1 00:25:15.336 --rc genhtml_legend=1 00:25:15.336 --rc geninfo_all_blocks=1 00:25:15.336 --rc geninfo_unexecuted_blocks=1 00:25:15.336 00:25:15.336 ' 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.336 --rc genhtml_branch_coverage=1 00:25:15.336 --rc genhtml_function_coverage=1 00:25:15.336 --rc genhtml_legend=1 00:25:15.336 --rc geninfo_all_blocks=1 00:25:15.336 --rc geninfo_unexecuted_blocks=1 00:25:15.336 00:25:15.336 ' 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.336 --rc genhtml_branch_coverage=1 00:25:15.336 --rc genhtml_function_coverage=1 00:25:15.336 --rc genhtml_legend=1 00:25:15.336 --rc geninfo_all_blocks=1 00:25:15.336 --rc geninfo_unexecuted_blocks=1 00:25:15.336 00:25:15.336 ' 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.336 --rc genhtml_branch_coverage=1 00:25:15.336 --rc genhtml_function_coverage=1 00:25:15.336 --rc genhtml_legend=1 00:25:15.336 --rc geninfo_all_blocks=1 00:25:15.336 --rc geninfo_unexecuted_blocks=1 00:25:15.336 00:25:15.336 ' 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.336 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.337 14:29:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.337 14:29:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.337 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
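A note on the "[: : integer expression expected" line captured above: it is a real artifact of this run, not log corruption. nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', i.e. an empty string is handed to the numeric -eq test; [ returns non-zero on the error and the script simply falls through to the next branch. The trace does not show which variable expanded empty, so the name in the sketch below is a placeholder; a minimal guard for the pattern, assuming the flag is meant to default to 0:

    # Sketch only -- SOME_NUMERIC_FLAG stands in for the variable tested at
    # nvmf/common.sh:33, whose actual name is not visible in this trace.
    # "${VAR:-0}" substitutes 0 when VAR is unset or empty, so the numeric
    # comparison never sees an empty operand and stderr stays clean.
    if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
        : # branch body elided; it is skipped in this run either way
    fi
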
00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:15.337 Cannot find device "nvmf_init_br" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:15.337 Cannot find device "nvmf_init_br2" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:15.337 Cannot find device "nvmf_tgt_br" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:25:15.337 Cannot find device "nvmf_tgt_br2" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:15.337 Cannot find device "nvmf_init_br" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:15.337 Cannot find device "nvmf_init_br2" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:15.337 Cannot find device "nvmf_tgt_br" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:15.337 Cannot find device "nvmf_tgt_br2" 00:25:15.337 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:15.338 Cannot find device "nvmf_br" 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:15.338 Cannot find device "nvmf_init_if" 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:15.338 Cannot find device "nvmf_init_if2" 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:15.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:15.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:15.338 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:15.598 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:15.598 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:15.598 14:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:15.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:15.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:25:15.598 00:25:15.598 --- 10.0.0.3 ping statistics --- 00:25:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.598 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:15.598 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:15.598 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:25:15.598 00:25:15.598 --- 10.0.0.4 ping statistics --- 00:25:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.598 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:15.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:25:15.598 00:25:15.598 --- 10.0.0.1 ping statistics --- 00:25:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.598 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:15.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:25:15.598 00:25:15.598 --- 10.0.0.2 ping statistics --- 00:25:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.598 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=96550 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 96550 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 96550 ']' 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.598 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.599 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.599 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.599 14:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.599 [2024-11-06 14:29:43.211674] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:15.599 [2024-11-06 14:29:43.211777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.857 [2024-11-06 14:29:43.368156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.857 [2024-11-06 14:29:43.466630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.857 [2024-11-06 14:29:43.466676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.857 [2024-11-06 14:29:43.466688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.857 [2024-11-06 14:29:43.466695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.857 [2024-11-06 14:29:43.466704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
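A note on the sequence that follows: with nvmf_tgt now running inside the nvmf_tgt_ns_spdk namespace (reachable from the host over the veth/bridge topology built above), the harness drives all configuration over the JSON-RPC socket and then points fio's SPDK plugin at the resulting subsystem. Condensed here only as a recap of the rpc.py calls traced below -- rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the arguments are exactly those of this run:

    # create the TCP transport with the flags fio.sh passes in this run
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # back the namespace with a 64 MiB malloc bdev using 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    # subsystem (-a: allow any host, -s: serial number), namespace, listener
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # fio then reaches the namespace through the spdk ioengine filename syntax:
    #   --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
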
00:25:15.857 [2024-11-06 14:29:43.468157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.857 [2024-11-06 14:29:43.468861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.857 [2024-11-06 14:29:43.469146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.857 [2024-11-06 14:29:43.469238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.424 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:16.424 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:25:16.424 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:16.681 [2024-11-06 14:29:44.236737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.681 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:16.681 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:16.681 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.681 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:16.940 Malloc1 00:25:16.940 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:17.198 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:17.456 14:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:17.456 [2024-11-06 14:29:45.076530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # shift 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:17.713 14:29:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:17.972 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:17.972 fio-3.35 00:25:17.972 Starting 1 thread 00:25:20.502 00:25:20.502 test: (groupid=0, jobs=1): err= 0: pid=96663: Wed Nov 6 14:29:47 2024 00:25:20.503 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:25:20.503 slat (usec): min=2, max=1412, avg= 2.28, stdev= 9.44 00:25:20.503 clat (usec): min=3185, max=10298, avg=5639.74, stdev=390.88 00:25:20.503 lat (usec): min=3218, max=10300, avg=5642.02, stdev=391.21 00:25:20.503 clat percentiles (usec): 00:25:20.503 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5342], 00:25:20.503 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5735], 00:25:20.503 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6194], 00:25:20.503 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 9503], 00:25:20.503 | 99.99th=[10028] 00:25:20.503 bw ( KiB/s): min=46768, max=48328, per=99.97%, avg=47548.00, stdev=756.48, samples=4 00:25:20.503 iops : min=11692, max=12082, avg=11887.00, stdev=189.12, samples=4 00:25:20.503 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:25:20.503 slat (usec): min=2, max=233, avg= 2.30, stdev= 1.74 00:25:20.503 clat (usec): min=2409, max=9725, avg=5117.49, stdev=349.46 00:25:20.503 lat (usec): min=2423, max=9727, avg=5119.79, stdev=349.49 00:25:20.503 clat percentiles (usec): 00:25:20.503 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:25:20.503 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:25:20.503 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:25:20.503 | 99.00th=[ 5866], 99.50th=[ 6456], 99.90th=[ 7963], 99.95th=[ 9110], 00:25:20.503 | 99.99th=[ 9634] 00:25:20.503 bw ( KiB/s): min=47232, max=47488, per=100.00%, avg=47346.00, stdev=108.20, samples=4 00:25:20.503 iops : min=11808, max=11872, avg=11836.50, stdev=27.05, samples=4 00:25:20.503 lat (msec) : 4=0.24%, 10=99.76%, 20=0.01% 00:25:20.503 cpu : usr=77.35%, sys=16.62%, ctx=41, majf=0, 
minf=1556 00:25:20.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:20.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:20.503 issued rwts: total=23841,23733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:20.503 00:25:20.503 Run status group 0 (all jobs): 00:25:20.503 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.7MB), run=2005-2005msec 00:25:20.503 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:25:20.503 ----------------------------------------------------- 00:25:20.503 Suppressions used: 00:25:20.503 count bytes template 00:25:20.503 1 57 /usr/src/fio/parse.c 00:25:20.503 1 8 libtcmalloc_minimal.so 00:25:20.503 ----------------------------------------------------- 00:25:20.503 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:20.503 14:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:25:20.503 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:20.503 fio-3.35 00:25:20.503 Starting 1 thread 00:25:23.027 00:25:23.027 test: (groupid=0, jobs=1): err= 0: pid=96705: Wed Nov 6 14:29:50 2024 00:25:23.027 read: IOPS=10.6k, BW=165MiB/s (173MB/s)(331MiB/2007msec) 00:25:23.027 slat (usec): min=2, max=125, avg= 3.01, stdev= 1.45 00:25:23.027 clat (usec): min=1108, max=14025, avg=7067.97, stdev=1829.18 00:25:23.027 lat (usec): min=1111, max=14029, avg=7070.97, stdev=1829.38 00:25:23.027 clat percentiles (usec): 00:25:23.027 | 1.00th=[ 3556], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5473], 00:25:23.027 | 30.00th=[ 5932], 40.00th=[ 6456], 50.00th=[ 7046], 60.00th=[ 7570], 00:25:23.027 | 70.00th=[ 7898], 80.00th=[ 8455], 90.00th=[ 9503], 95.00th=[10290], 00:25:23.028 | 99.00th=[11863], 99.50th=[12518], 99.90th=[13435], 99.95th=[13566], 00:25:23.028 | 99.99th=[13960] 00:25:23.028 bw ( KiB/s): min=72448, max=95552, per=50.79%, avg=85768.00, stdev=11082.00, samples=4 00:25:23.028 iops : min= 4528, max= 5972, avg=5360.50, stdev=692.62, samples=4 00:25:23.028 write: IOPS=6158, BW=96.2MiB/s (101MB/s)(175MiB/1818msec); 0 zone resets 00:25:23.028 slat (usec): min=27, max=158, avg=31.34, stdev= 5.39 00:25:23.028 clat (usec): min=2828, max=14291, avg=8715.79, stdev=1548.63 00:25:23.028 lat (usec): min=2866, max=14320, avg=8747.13, stdev=1549.99 00:25:23.028 clat percentiles (usec): 00:25:23.028 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7308], 00:25:23.028 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 9110], 00:25:23.028 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:25:23.028 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13173], 99.95th=[13435], 00:25:23.028 | 99.99th=[13698] 00:25:23.028 bw ( KiB/s): min=76608, max=98880, per=90.63%, avg=89312.00, stdev=11076.81, samples=4 00:25:23.028 iops : min= 4788, max= 6180, avg=5582.00, stdev=692.30, samples=4 00:25:23.028 lat (msec) : 2=0.05%, 4=1.89%, 10=86.35%, 20=11.71% 00:25:23.028 cpu : usr=81.06%, sys=12.96%, ctx=17, majf=0, minf=2538 00:25:23.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:23.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:23.028 issued rwts: total=21183,11197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:23.028 00:25:23.028 Run status group 0 (all jobs): 00:25:23.028 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=331MiB (347MB), run=2007-2007msec 00:25:23.028 WRITE: bw=96.2MiB/s (101MB/s), 96.2MiB/s-96.2MiB/s (101MB/s-101MB/s), io=175MiB (183MB), run=1818-1818msec 00:25:23.028 ----------------------------------------------------- 00:25:23.028 Suppressions used: 00:25:23.028 count bytes template 00:25:23.028 1 57 /usr/src/fio/parse.c 00:25:23.028 763 73248 /usr/src/fio/iolog.c 00:25:23.028 1 8 libtcmalloc_minimal.so 00:25:23.028 ----------------------------------------------------- 00:25:23.028 00:25:23.028 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:25:23.287 14:29:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:23.287 14:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:25:23.545 Nvme0n1 00:25:23.545 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:25:23.804 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=65a66429-47e7-4a46-8265-a3c614196045 00:25:23.804 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 65a66429-47e7-4a46-8265-a3c614196045 00:25:23.804 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=65a66429-47e7-4a46-8265-a3c614196045 00:25:23.804 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:25:23.804 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:25:23.804 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:25:23.804 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:25:24.063 { 00:25:24.063 "base_bdev": "Nvme0n1", 00:25:24.063 "block_size": 4096, 00:25:24.063 "cluster_size": 1073741824, 00:25:24.063 "free_clusters": 4, 00:25:24.063 "name": "lvs_0", 00:25:24.063 "total_data_clusters": 4, 00:25:24.063 "uuid": "65a66429-47e7-4a46-8265-a3c614196045" 00:25:24.063 } 00:25:24.063 ]' 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="65a66429-47e7-4a46-8265-a3c614196045") .free_clusters' 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=4 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="65a66429-47e7-4a46-8265-a3c614196045") .cluster_size' 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:25:24.063 4096 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=4096 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 4096 00:25:24.063 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -l lvs_0 lbd_0 4096 00:25:24.321 8d6e9c12-481c-4438-875d-72178aa96e6f 00:25:24.321 14:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:25:24.579 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:25:24.579 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:24.837 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:24.838 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:24.838 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:24.838 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:24.838 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:25:24.838 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:24.838 14:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:25.097 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:25.097 fio-3.35 00:25:25.097 Starting 1 thread 00:25:27.624 00:25:27.624 test: (groupid=0, jobs=1): err= 0: pid=96849: Wed Nov 6 14:29:54 2024 00:25:27.624 read: IOPS=8384, BW=32.8MiB/s (34.3MB/s)(65.7MiB/2007msec) 
00:25:27.624 slat (nsec): min=1583, max=404584, avg=2200.00, stdev=3996.67 00:25:27.624 clat (usec): min=4143, max=13975, avg=8001.23, stdev=622.80 00:25:27.624 lat (usec): min=4156, max=13977, avg=8003.43, stdev=622.66 00:25:27.624 clat percentiles (usec): 00:25:27.624 | 1.00th=[ 6652], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:25:27.624 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:25:27.624 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:25:27.624 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[11600], 99.95th=[12911], 00:25:27.624 | 99.99th=[13698] 00:25:27.624 bw ( KiB/s): min=32800, max=33976, per=99.91%, avg=33508.00, stdev=537.95, samples=4 00:25:27.624 iops : min= 8200, max= 8494, avg=8377.00, stdev=134.49, samples=4 00:25:27.624 write: IOPS=8377, BW=32.7MiB/s (34.3MB/s)(65.7MiB/2007msec); 0 zone resets 00:25:27.624 slat (nsec): min=1650, max=314468, avg=2263.58, stdev=2692.68 00:25:27.624 clat (usec): min=3101, max=13771, avg=7225.48, stdev=577.22 00:25:27.624 lat (usec): min=3117, max=13773, avg=7227.74, stdev=577.12 00:25:27.624 clat percentiles (usec): 00:25:27.624 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:25:27.624 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:25:27.624 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:25:27.624 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10290], 99.95th=[11731], 00:25:27.624 | 99.99th=[13698] 00:25:27.624 bw ( KiB/s): min=33280, max=33696, per=100.00%, avg=33522.00, stdev=198.38, samples=4 00:25:27.624 iops : min= 8320, max= 8424, avg=8380.50, stdev=49.60, samples=4 00:25:27.624 lat (msec) : 4=0.02%, 10=99.80%, 20=0.18% 00:25:27.624 cpu : usr=78.36%, sys=17.40%, ctx=6, majf=0, minf=1557 00:25:27.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:27.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:27.624 issued rwts: total=16828,16814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:27.624 00:25:27.624 Run status group 0 (all jobs): 00:25:27.624 READ: bw=32.8MiB/s (34.3MB/s), 32.8MiB/s-32.8MiB/s (34.3MB/s-34.3MB/s), io=65.7MiB (68.9MB), run=2007-2007msec 00:25:27.624 WRITE: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.7MiB (68.9MB), run=2007-2007msec 00:25:27.624 ----------------------------------------------------- 00:25:27.624 Suppressions used: 00:25:27.624 count bytes template 00:25:27.624 1 58 /usr/src/fio/parse.c 00:25:27.624 1 8 libtcmalloc_minimal.so 00:25:27.624 ----------------------------------------------------- 00:25:27.624 00:25:27.624 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=dc47f9d7-05b8-48f0-b4b3-5b4f00da43ce 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb dc47f9d7-05b8-48f0-b4b3-5b4f00da43ce 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local 
lvs_uuid=dc47f9d7-05b8-48f0-b4b3-5b4f00da43ce 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:25:27.882 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:25:28.140 { 00:25:28.140 "base_bdev": "Nvme0n1", 00:25:28.140 "block_size": 4096, 00:25:28.140 "cluster_size": 1073741824, 00:25:28.140 "free_clusters": 0, 00:25:28.140 "name": "lvs_0", 00:25:28.140 "total_data_clusters": 4, 00:25:28.140 "uuid": "65a66429-47e7-4a46-8265-a3c614196045" 00:25:28.140 }, 00:25:28.140 { 00:25:28.140 "base_bdev": "8d6e9c12-481c-4438-875d-72178aa96e6f", 00:25:28.140 "block_size": 4096, 00:25:28.140 "cluster_size": 4194304, 00:25:28.140 "free_clusters": 1022, 00:25:28.140 "name": "lvs_n_0", 00:25:28.140 "total_data_clusters": 1022, 00:25:28.140 "uuid": "dc47f9d7-05b8-48f0-b4b3-5b4f00da43ce" 00:25:28.140 } 00:25:28.140 ]' 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="dc47f9d7-05b8-48f0-b4b3-5b4f00da43ce") .free_clusters' 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=1022 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="dc47f9d7-05b8-48f0-b4b3-5b4f00da43ce") .cluster_size' 00:25:28.140 4088 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=4088 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 4088 00:25:28.140 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:25:28.399 2c51dd62-93af-447c-8228-2ea95f808f51 00:25:28.399 14:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:25:28.657 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:28.916 14:29:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:28.916 14:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:29.174 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:29.174 fio-3.35 00:25:29.174 Starting 1 thread 00:25:31.707 00:25:31.707 test: (groupid=0, jobs=1): err= 0: pid=96957: Wed Nov 6 14:29:58 2024 00:25:31.707 read: IOPS=6775, BW=26.5MiB/s (27.8MB/s)(53.2MiB/2009msec) 00:25:31.707 slat (nsec): min=1593, max=165393, avg=1799.83, stdev=1792.62 00:25:31.707 clat (usec): min=2855, max=20774, avg=9895.14, stdev=1666.61 00:25:31.707 lat (usec): min=2868, max=20775, avg=9896.94, stdev=1666.48 00:25:31.707 clat percentiles (usec): 00:25:31.707 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8586], 00:25:31.707 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:25:31.707 | 70.00th=[10421], 80.00th=[11338], 90.00th=[12125], 95.00th=[12780], 00:25:31.707 | 99.00th=[14746], 99.50th=[17171], 99.90th=[20317], 99.95th=[20579], 00:25:31.707 | 99.99th=[20841] 00:25:31.707 bw ( KiB/s): min=22880, max=30256, per=99.91%, avg=27076.00, stdev=3190.26, samples=4 00:25:31.707 iops : min= 5720, max= 7564, avg=6769.00, stdev=797.57, samples=4 00:25:31.707 write: IOPS=6770, BW=26.4MiB/s (27.7MB/s)(53.1MiB/2009msec); 0 zone resets 00:25:31.707 slat (nsec): min=1631, max=78151, avg=1858.03, stdev=1040.58 00:25:31.707 clat (usec): min=1902, max=19429, avg=8874.52, stdev=1482.56 00:25:31.707 lat (usec): min=1919, max=19431, avg=8876.38, stdev=1482.45 00:25:31.707 clat percentiles (usec): 00:25:31.707 | 1.00th=[ 6718], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7767], 00:25:31.707 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:25:31.707 | 70.00th=[ 9372], 
80.00th=[10159], 90.00th=[10945], 95.00th=[11469], 00:25:31.707 | 99.00th=[13173], 99.50th=[14353], 99.90th=[18744], 99.95th=[19268], 00:25:31.707 | 99.99th=[19530] 00:25:31.707 bw ( KiB/s): min=23168, max=29760, per=100.00%, avg=27094.00, stdev=3177.72, samples=4 00:25:31.707 iops : min= 5792, max= 7440, avg=6773.50, stdev=794.43, samples=4 00:25:31.707 lat (msec) : 2=0.01%, 4=0.11%, 10=71.30%, 20=28.52%, 50=0.06% 00:25:31.707 cpu : usr=81.82%, sys=15.14%, ctx=11, majf=0, minf=1556 00:25:31.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:31.707 issued rwts: total=13611,13601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:31.707 00:25:31.707 Run status group 0 (all jobs): 00:25:31.707 READ: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=53.2MiB (55.8MB), run=2009-2009msec 00:25:31.707 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=53.1MiB (55.7MB), run=2009-2009msec 00:25:31.707 ----------------------------------------------------- 00:25:31.707 Suppressions used: 00:25:31.707 count bytes template 00:25:31.707 1 58 /usr/src/fio/parse.c 00:25:31.707 1 8 libtcmalloc_minimal.so 00:25:31.707 ----------------------------------------------------- 00:25:31.707 00:25:31.707 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:31.966 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:25:31.966 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:25:32.224 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:32.224 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:25:32.482 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:32.740 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.269 rmmod nvme_tcp 00:25:35.269 rmmod nvme_fabrics 00:25:35.269 rmmod nvme_keyring 00:25:35.269 14:30:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 96550 ']' 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 96550 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 96550 ']' 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 96550 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96550 00:25:35.269 killing process with pid 96550 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96550' 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 96550 00:25:35.269 14:30:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 96550 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:35.836 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip 
link delete nvmf_br type bridge 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:25:36.095 00:25:36.095 real 0m21.016s 00:25:36.095 user 1m28.721s 00:25:36.095 sys 0m3.586s 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.095 ************************************ 00:25:36.095 END TEST nvmf_fio_host 00:25:36.095 ************************************ 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:36.095 14:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:36.096 14:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.096 ************************************ 00:25:36.096 START TEST nvmf_failover 00:25:36.096 ************************************ 00:25:36.096 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:36.355 * Looking for test storage... 
00:25:36.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:36.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.355 --rc genhtml_branch_coverage=1 00:25:36.355 --rc genhtml_function_coverage=1 00:25:36.355 --rc genhtml_legend=1 00:25:36.355 --rc geninfo_all_blocks=1 00:25:36.355 --rc geninfo_unexecuted_blocks=1 00:25:36.355 00:25:36.355 ' 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:36.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.355 --rc genhtml_branch_coverage=1 00:25:36.355 --rc genhtml_function_coverage=1 00:25:36.355 --rc genhtml_legend=1 00:25:36.355 --rc geninfo_all_blocks=1 00:25:36.355 --rc geninfo_unexecuted_blocks=1 00:25:36.355 00:25:36.355 ' 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:36.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.355 --rc genhtml_branch_coverage=1 00:25:36.355 --rc genhtml_function_coverage=1 00:25:36.355 --rc genhtml_legend=1 00:25:36.355 --rc geninfo_all_blocks=1 00:25:36.355 --rc geninfo_unexecuted_blocks=1 00:25:36.355 00:25:36.355 ' 00:25:36.355 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.356 --rc genhtml_branch_coverage=1 00:25:36.356 --rc genhtml_function_coverage=1 00:25:36.356 --rc genhtml_legend=1 00:25:36.356 --rc geninfo_all_blocks=1 00:25:36.356 --rc geninfo_unexecuted_blocks=1 00:25:36.356 00:25:36.356 ' 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.356 
14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
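
The "[: : integer expression expected" complaint from test/nvmf/common.sh line 33 a few records above is shell noise, not a test failure: the trace shows the test as '[' '' -eq 1 ']', meaning the variable expanded to an empty string, and test(1)'s -eq needs an integer on both sides, so it prints an error and returns false, which happens to be the branch the harness wants anyway. A minimal sketch of the quieter idiom, where VAR is a hypothetical stand-in for whichever flag common.sh tests at that line:

    # default an empty or unset value to 0 before an integer comparison
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
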
00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:36.356 Cannot find device "nvmf_init_br" 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:36.356 Cannot find device "nvmf_init_br2" 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:25:36.356 Cannot find device "nvmf_tgt_br" 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.356 Cannot find device "nvmf_tgt_br2" 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:36.356 Cannot find device "nvmf_init_br" 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:36.356 Cannot find device "nvmf_init_br2" 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:36.356 Cannot find device "nvmf_tgt_br" 00:25:36.356 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:36.357 Cannot find device "nvmf_tgt_br2" 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:36.357 Cannot find device "nvmf_br" 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:36.357 Cannot find device "nvmf_init_if" 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:36.357 Cannot find device "nvmf_init_if2" 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:36.357 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:36.616 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:36.616 14:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:36.616 
14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:36.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:25:36.616 00:25:36.616 --- 10.0.0.3 ping statistics --- 00:25:36.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.616 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:36.616 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:36.616 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.027 ms 00:25:36.616 00:25:36.616 --- 10.0.0.4 ping statistics --- 00:25:36.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.616 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:36.616 00:25:36.616 --- 10.0.0.1 ping statistics --- 00:25:36.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.616 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:36.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:25:36.616 00:25:36.616 --- 10.0.0.2 ping statistics --- 00:25:36.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.616 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=97296 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 97296 00:25:36.616 14:30:04 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 97296 ']' 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:36.616 14:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 [2024-11-06 14:30:04.244033] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:36.616 [2024-11-06 14:30:04.244122] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.877 [2024-11-06 14:30:04.399103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:36.877 [2024-11-06 14:30:04.498831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.877 [2024-11-06 14:30:04.498881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.877 [2024-11-06 14:30:04.498893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.877 [2024-11-06 14:30:04.498902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.877 [2024-11-06 14:30:04.498913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
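
A note on the core mask: nvmf_tgt was started with -m 0xE, and 0xE is binary 1110, so bits 1, 2 and 3 are set. That is why "Total cores available: 3" appears above and why the three reactor notices just below report cores 2, 3 and 1, leaving core 0 free; the bdevperf process further down, started with no mask, indeed logs "Reactor started on core 0". A minimal sketch for decoding any SPDK -m mask:

    # decode an SPDK -m core mask into core IDs
    mask=0xE
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done
    # prints: core 1, core 2, core 3
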
00:25:36.877 [2024-11-06 14:30:04.500412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.877 [2024-11-06 14:30:04.500707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.877 [2024-11-06 14:30:04.500792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.817 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:37.817 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:37.817 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.817 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.818 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:37.818 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.818 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:37.818 [2024-11-06 14:30:05.301476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.818 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:38.076 Malloc0 00:25:38.076 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.337 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.337 14:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:38.598 [2024-11-06 14:30:06.140820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:38.598 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:38.860 [2024-11-06 14:30:06.340917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:38.860 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:25:39.121 [2024-11-06 14:30:06.541092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=97402 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 97402 /var/tmp/bdevperf.sock 
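
Before the qpair state-change noise that follows, it helps to see the shape of the exercise: bdevperf attaches NVMe0 over two of the three listeners with -x failover, then the test alternately removes and re-adds listeners (4420 removed first, 4422 attached, 4421 removed, 4420 re-added, 4422 removed) so I/O keeps failing over between paths for the whole 15-second run (-t 15 above). A condensed sketch of that RPC sequence; every command appears verbatim in the trace, only the $RPC shorthand and comments are added:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # two paths to the same subsystem, failover policy
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # drop the active path on the target side; I/O fails over to the survivor
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
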
00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 97402 ']' 00:25:39.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:39.121 14:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.056 14:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:40.056 14:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:40.056 14:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:40.056 NVMe0n1 00:25:40.056 14:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:40.315 00:25:40.315 14:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=97450 00:25:40.315 14:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:40.315 14:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:41.691 14:30:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:41.691 [2024-11-06 14:30:09.067181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:25:41.691
[... the same recv-state message for tqpair=0x618000003080 repeats at microsecond intervals through 2024-11-06 14:30:09.067309 ...] 00:25:41.691
14:30:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:45.013 14:30:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:45.013 00 00:25:45.013 14:30:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:45.013 [2024-11-06 14:30:12.582078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:25:45.013
[... the same recv-state message for tqpair=0x618000003880 repeats at microsecond intervals through 2024-11-06 14:30:12.582279 ...] 00:25:45.013 [2024-11-06 14:30:12.582285]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:25:45.013 14:30:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:48.295 14:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:48.295 [2024-11-06 14:30:15.792058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:48.295 14:30:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:49.229 14:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:25:49.488 14:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 97450 00:25:56.064 { 00:25:56.064 "results": [ 00:25:56.064 { 00:25:56.064 "job": "NVMe0n1", 00:25:56.064 "core_mask": "0x1", 00:25:56.064 "workload": "verify", 00:25:56.064 "status": "finished", 00:25:56.064 "verify_range": { 00:25:56.064 "start": 0, 00:25:56.064 "length": 16384 00:25:56.064 }, 00:25:56.064 "queue_depth": 128, 00:25:56.064 "io_size": 4096, 00:25:56.064 "runtime": 15.003561, 00:25:56.064 "iops": 11171.881128753368, 00:25:56.064 "mibps": 43.640160659192844, 00:25:56.064 "io_failed": 4653, 00:25:56.064 "io_timeout": 0, 00:25:56.064 "avg_latency_us": 11124.963233760047, 00:25:56.064 "min_latency_us": 545.0830769230769, 00:25:56.064 "max_latency_us": 21677.292307692307 00:25:56.064 } 00:25:56.064 ], 00:25:56.064 "core_count": 1 00:25:56.064 } 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 97402 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 97402 ']' 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 97402 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97402 00:25:56.064 killing process with pid 97402 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97402' 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 97402 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 97402 00:25:56.064 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:56.064 [2024-11-06 14:30:06.623391] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:25:56.064 [2024-11-06 14:30:06.623517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97402 ]
00:25:56.064 [2024-11-06 14:30:06.778470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:56.064 [2024-11-06 14:30:06.858235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:56.064 Running I/O for 15 seconds...
00:25:56.064 11436.00 IOPS, 44.67 MiB/s [2024-11-06T14:30:23.699Z]
00:25:56.064 [2024-11-06 14:30:09.067754 .. 14:30:09.070458] [... repeated entries condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion print each in-flight command on qid:1 (WRITE lba:99760-100272 and READ lba:99256-99744, len:8), every one completing as ABORTED - SQ DELETION (00/08), while the submission queue is torn down ...]
00:25:56.067 [2024-11-06 14:30:09.070470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set
00:25:56.067 [2024-11-06 14:30:09.070483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:56.067 [2024-11-06 14:30:09.070491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:56.067 [2024-11-06 14:30:09.070500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99752 len:8 PRP1 0x0 PRP2 0x0
00:25:56.067 [2024-11-06 14:30:09.070510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.067 [2024-11-06 14:30:09.070678] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
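The burst above is the I/O-path half of a failover: when the TCP connection to 10.0.0.3:4420 drops, every command still queued on I/O qpair 1 is completed back to bdevperf with ABORTED - SQ DELETION before being retried. For orientation, a minimal sketch of the kind of bdevperf run that produces this output; the binary path, queue depth, I/O size, workload, and duration below are assumptions inferred from the 15-second run and the IOPS-to-MiB/s ratio, not flags shown in the log:

    # all flag values are inferred from the log above, not taken from it:
    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds
    ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15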
00:25:56.068 [2024-11-06 14:30:09.070727 .. 14:30:09.070798] [... repeated entries condensed: the four outstanding admin ASYNC EVENT REQUESTs (qid:0 cid:0-3) are each printed by nvme_qpair.c and completed as ABORTED - SQ DELETION (00/08) ...]
00:25:56.068 [2024-11-06 14:30:09.070808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:56.068 [2024-11-06 14:30:09.070840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:25:56.068 [2024-11-06 14:30:09.073562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:56.068 [2024-11-06 14:30:09.102485] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
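That completes one failover cycle: bdev_nvme marks the path to 10.0.0.3:4420 failed, aborts the admin queue, disconnects the controller, and reconnects to the second listener at 10.0.0.3:4421, after which I/O resumes. A sketch of the two-listener topology these notices imply; the subsystem NQN, address, and ports are taken from the log, but the RPC sequence itself is an assumption about how the test script builds it:

    # target side: one subsystem reachable on two ports, so the initiator has a spare path
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # initiator side: attach both transport IDs under one controller so bdev_nvme can fail over
    rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover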
00:25:56.068 11300.00 IOPS, 44.14 MiB/s [2024-11-06T14:30:23.703Z] 11355.33 IOPS, 44.36 MiB/s [2024-11-06T14:30:23.703Z] 11418.75 IOPS, 44.60 MiB/s [2024-11-06T14:30:23.703Z]
00:25:56.068 [2024-11-06 14:30:12.583298 .. 14:30:12.584975] [... repeated entries condensed: the same nvme_qpair.c NOTICE pair repeats for every in-flight command on qid:1 (WRITE lba:39768-40136 and READ lba:39368-39464+, len:8), each completing as ABORTED - SQ DELETION (00/08) as the queue is torn down again; the capture breaks off mid-entry at "READ sqid:1 cid:0 nsid:1" ...]
lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.069 [2024-11-06 14:30:12.584987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.069 [2024-11-06 14:30:12.585000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.069 [2024-11-06 14:30:12.585012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.069 [2024-11-06 14:30:12.585026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.069 [2024-11-06 14:30:12.585038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.069 [2024-11-06 14:30:12.585052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.069 [2024-11-06 14:30:12.585064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.069 [2024-11-06 14:30:12.585077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.069 [2024-11-06 14:30:12.585094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.069 [2024-11-06 14:30:12.585108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.069 [2024-11-06 14:30:12.585119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.069 [2024-11-06 14:30:12.585133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 
14:30:12.585506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.070 [2024-11-06 14:30:12.585919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.585960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.585975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.585986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.586011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.586036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.586060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.586091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.586117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.586142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.070 [2024-11-06 14:30:12.586167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.070 [2024-11-06 14:30:12.586180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 
14:30:12.586560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.071 [2024-11-06 14:30:12.586697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.071 [2024-11-06 14:30:12.586743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.071 [2024-11-06 14:30:12.586755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40384 len:8 PRP1 0x0 PRP2 0x0 00:25:56.071 [2024-11-06 14:30:12.586769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.586979] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:25:56.071 [2024-11-06 14:30:12.587028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.071 [2024-11-06 14:30:12.587043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 14:30:12.587056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.071 [2024-11-06 14:30:12.587068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.071 [2024-11-06 
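A note on the status printed in every completion above: spdk_nvme_print_completion renders the NVMe status as (status code type / status code), so "(00/08)" is SCT 0h, Generic Command Status, with SC 08h, Command Aborted due to SQ Deletion - the expected outcome when a submission queue is deleted out from under its queued commands during failover. A minimal decoding sketch in C follows; the decode_status helper and its abbreviated table are illustrative, not SPDK's own code:

#include <stdio.h>

/* Decode the "(sct/sc)" pair shown in the completions above,
 * per the NVMe base spec status encoding (table abbreviated). */
static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0) {                /* SCT 0h: Generic Command Status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";   /* Command Abort Requested */
        case 0x08: return "ABORTED - SQ DELETION";  /* Aborted due to SQ Deletion */
        }
    }
    return "UNKNOWN";
}

int main(void)
{
    /* The completions above all carry (00/08): the qpair's SQ was
     * deleted mid-failover, so every queued command is aborted. */
    printf("(%02x/%02x) -> %s\n", 0x0u, 0x08u, decode_status(0x0, 0x08));
    return 0;
}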
00:25:56.071 [2024-11-06 14:30:12.587128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:56.071 [2024-11-06 14:30:12.587165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:25:56.071 [2024-11-06 14:30:12.590634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:56.071 [2024-11-06 14:30:12.619547] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:56.071 11332.00 IOPS, 44.27 MiB/s [2024-11-06T14:30:23.706Z] 11349.00 IOPS, 44.33 MiB/s [2024-11-06T14:30:23.706Z] 11368.14 IOPS, 44.41 MiB/s [2024-11-06T14:30:23.706Z] 11225.88 IOPS, 43.85 MiB/s [2024-11-06T14:30:23.706Z] 11043.56 IOPS, 43.14 MiB/s
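The per-second samples above pair IOPS with MiB/s. Assuming each I/O is the len:8 shown in the commands, i.e. 8 logical blocks of 512 bytes = 4 KiB (the 512-byte block size is an assumption; the log does not print it), the two columns agree: 11332.00 IOPS x 4096 B comes to about 44.27 MiB/s. A small self-contained check in C with the sample values hard-coded:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* IOPS samples copied from the log line above. */
    const double samples_iops[] = { 11332.00, 11349.00, 11368.14, 11225.88, 11043.56 };
    /* len:8 logical blocks x 512 B (assumed block size) = 4 KiB per I/O. */
    const double io_bytes = 8 * 512;

    for (size_t i = 0; i < sizeof(samples_iops) / sizeof(samples_iops[0]); i++) {
        /* e.g. 11332.00 * 4096 / 2^20 = 44.27, matching the logged MiB/s. */
        printf("%8.2f IOPS -> %5.2f MiB/s\n",
               samples_iops[i], samples_iops[i] * io_bytes / (1024.0 * 1024.0));
    }
    return 0;
}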
00:25:56.072 [2024-11-06 14:30:17.006336 - 14:30:17.008380] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a second abort burst on qid:1 during the next failover pass - WRITE lba:33576 through lba:34088 and READ lba:33264 through lba:33504, len:8 each - every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (per-command entries condensed)
00:25:56.074 [2024-11-06 14:30:17.008391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.074 [2024-11-06 14:30:17.008400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:56.074 [2024-11-06 14:30:17.008605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.074 [2024-11-06 14:30:17.008851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.074 [2024-11-06 14:30:17.008887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33512 len:8 PRP1 0x0 PRP2 0x0 00:25:56.074 [2024-11-06 14:30:17.008897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.074 [2024-11-06 14:30:17.008918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.074 [2024-11-06 14:30:17.008925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33520 len:8 PRP1 0x0 PRP2 0x0 00:25:56.074 [2024-11-06 14:30:17.008935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.074 [2024-11-06 14:30:17.008958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.074 [2024-11-06 14:30:17.008965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33528 len:8 PRP1 0x0 PRP2 0x0 00:25:56.074 [2024-11-06 14:30:17.008975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.074 [2024-11-06 14:30:17.008983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.074 [2024-11-06 14:30:17.008989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.074 [2024-11-06 14:30:17.008997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33536 len:8 PRP1 0x0 PRP2 0x0 00:25:56.074 [2024-11-06 14:30:17.009008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.075 [2024-11-06 14:30:17.009023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.075 [2024-11-06 14:30:17.009030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33544 len:8 PRP1 0x0 PRP2 0x0 00:25:56.075 [2024-11-06 14:30:17.009039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.075 [2024-11-06 14:30:17.009054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.075 [2024-11-06 14:30:17.009061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33552 len:8 PRP1 0x0 PRP2 0x0 00:25:56.075 [2024-11-06 14:30:17.009069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.075 [2024-11-06 14:30:17.009084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.075 [2024-11-06 14:30:17.009091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33560 len:8 PRP1 0x0 PRP2 0x0 00:25:56.075 [2024-11-06 14:30:17.009099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.075 [2024-11-06 14:30:17.009119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.075 [2024-11-06 14:30:17.009126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33568 len:8 PRP1 0x0 PRP2 0x0 00:25:56.075 [2024-11-06 14:30:17.009135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009292] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:25:56.075 [2024-11-06 14:30:17.009332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.075 [2024-11-06 14:30:17.009344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.075 [2024-11-06 14:30:17.009363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.075 [2024-11-06 14:30:17.009381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.075 [2024-11-06 14:30:17.009400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.075 [2024-11-06 14:30:17.009409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:56.075 [2024-11-06 14:30:17.012069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:56.075 [2024-11-06 14:30:17.012103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:56.075 [2024-11-06 14:30:17.042934] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:56.075 11038.60 IOPS, 43.12 MiB/s [2024-11-06T14:30:23.710Z] 11066.00 IOPS, 43.23 MiB/s [2024-11-06T14:30:23.710Z] 11108.17 IOPS, 43.39 MiB/s [2024-11-06T14:30:23.710Z] 11130.85 IOPS, 43.48 MiB/s [2024-11-06T14:30:23.710Z] 11148.93 IOPS, 43.55 MiB/s 00:25:56.075 Latency(us) 00:25:56.075 [2024-11-06T14:30:23.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.075 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:56.075 Verification LBA range: start 0x0 length 0x4000 00:25:56.075 NVMe0n1 : 15.00 11171.88 43.64 310.13 0.00 11124.96 545.08 21677.29 00:25:56.075 [2024-11-06T14:30:23.710Z] =================================================================================================================== 00:25:56.075 [2024-11-06T14:30:23.710Z] Total : 11171.88 43.64 310.13 0.00 11124.96 545.08 21677.29 00:25:56.075 Received shutdown signal, test time was about 15.000000 seconds 00:25:56.075 00:25:56.075 Latency(us) 00:25:56.075 [2024-11-06T14:30:23.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.075 [2024-11-06T14:30:23.710Z] =================================================================================================================== 00:25:56.075 [2024-11-06T14:30:23.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:56.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=97658 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 97658 /var/tmp/bdevperf.sock 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 97658 ']' 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
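The records above are the pass/fail heart of the test: host/failover.sh greps its own bdevperf log for 'Resetting controller successful', insists on exactly three hits (one per failover leg), then relaunches bdevperf in RPC-driven mode, where -z holds the app idle until a perform_tests RPC arrives and -r names the control socket. A minimal sketch of the same pattern follows; the log path and the relative bdevperf path are illustrative, the flags and socket mirror this run.

#!/usr/bin/env bash
# Sketch: verify failover count, then relaunch bdevperf in RPC-driven mode.
LOG=/tmp/failover.log            # hypothetical capture of the previous run's output
SOCK=/var/tmp/bdevperf.sock
EXPECTED=3

count=$(grep -c 'Resetting controller successful' "$LOG")
if (( count != EXPECTED )); then
    echo "expected $EXPECTED successful resets, saw $count" >&2
    exit 1
fi

# -z: wait for a perform_tests RPC before running I/O; -r: control socket path.
./build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Block until the control socket exists before issuing RPCs against it.
while [[ ! -S "$SOCK" ]]; do sleep 0.1; done

Driving bdevperf over a socket this way is what lets the script re-cable paths between runs without restarting the I/O process.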
00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:56.075 14:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:57.010 14:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
14:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
14:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:25:57.010 [2024-11-06 14:30:24.622978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:25:57.268 14:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:25:57.268 [2024-11-06 14:30:24.835088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:25:57.268 14:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:57.526 NVMe0n1
00:25:57.526 14:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:57.784
00:25:57.784 14:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:58.042
00:25:58.042 14:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:58.300 14:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:58.557 14:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:01.836 14:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
14:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
14:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=97791
14:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 97791
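The traced records above are the entire failover driver: publish two extra listeners on the target, attach the same controller name to all three paths with -x failover so the bdev layer treats them as alternates, detach the active path, and give the initiator a few seconds to reset onto the next listener. A condensed sketch of that flow, assuming an SPDK checkout (scripts/rpc.py) and the same addresses and ports as this run:

# Sketch: drive one failover cycle against a bdevperf started with -z -r $SOCK.
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Extra target listeners (the primary on 4420 already exists in this test).
scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422

# Same bdev name on every path; -x failover makes the extra paths alternates.
for port in 4420 4421 4422; do
    scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s "$port" -f ipv4 -n "$NQN" -x failover
done

# Drop the active path; the initiator should fail over to the next listener.
scripts/rpc.py -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n "$NQN"
sleep 3
scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

The JSON document printed next is the aggregated result of the perform_tests RPC issued above, reaped by the wait.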
00:26:03.208 {
00:26:03.208 "results": [
00:26:03.208 {
00:26:03.208 "job": "NVMe0n1",
00:26:03.208 "core_mask": "0x1",
00:26:03.208 "workload": "verify",
00:26:03.208 "status": "finished",
00:26:03.208 "verify_range": {
00:26:03.208 "start": 0,
00:26:03.208 "length": 16384
00:26:03.208 },
00:26:03.208 "queue_depth": 128,
00:26:03.208 "io_size": 4096,
00:26:03.208 "runtime": 1.007863,
00:26:03.208 "iops": 11447.984497893067,
00:26:03.208 "mibps": 44.71868944489479,
00:26:03.208 "io_failed": 0,
00:26:03.208 "io_timeout": 0,
00:26:03.208 "avg_latency_us": 11131.787027214421,
00:26:03.208 "min_latency_us": 1531.273846153846,
00:26:03.208 "max_latency_us": 12300.603076923077
00:26:03.208 }
00:26:03.208 ],
00:26:03.208 "core_count": 1
00:26:03.208 }
14:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:03.208 [2024-11-06 14:30:23.671439] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:26:03.208 [2024-11-06 14:30:23.671565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97658 ]
00:26:03.208 [2024-11-06 14:30:23.831163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:03.208 [2024-11-06 14:30:23.927581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:03.208 [2024-11-06 14:30:26.077111] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:26:03.208 [... four ASYNC EVENT REQUEST admin commands (qid:0, cid:0-3) aborted with "ABORTED - SQ DELETION (00/08)" omitted ...]
00:26:03.209 [2024-11-06 14:30:26.077311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:03.209 [2024-11-06 14:30:26.077366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:03.209 [2024-11-06 14:30:26.077391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:03.209 [2024-11-06 14:30:26.087273] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:03.209 Running I/O for 1 seconds...
00:26:03.209 11409.00 IOPS, 44.57 MiB/s
00:26:03.209
00:26:03.209 Latency(us)
[2024-11-06T14:30:30.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:03.209 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:03.209 Verification LBA range: start 0x0 length 0x4000
00:26:03.209 NVMe0n1 : 1.01 11447.98 44.72 0.00 0.00 11131.79 1531.27 12300.60
[2024-11-06T14:30:30.844Z] ===================================================================================================================
[2024-11-06T14:30:30.844Z] Total : 11447.98 44.72 0.00 0.00 11131.79 1531.27 12300.60
14:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
14:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:03.466 14:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
14:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:03.724 14:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
14:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:07.039 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:07.039 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 97658
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 97658 ']'
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 97658
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97658
00:26:07.039 killing process with pid 97658
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97658'
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 97658
14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 97658
00:26:07.604 14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
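Both result blocks (the 15-second run summarized earlier and the 1-second verify above) are also emitted as the machine-readable JSON shown after the wait. A sketch of pulling the headline numbers out of such a blob with jq, assuming it was captured to results.json (a hypothetical file name, not something this script does):

# Hypothetical post-processing of the bdevperf JSON shown above.
jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us | floor) us, failed \(.io_failed)"' results.json
# For this run that would print: NVMe0n1: 11447 IOPS, avg 11131 us, failed 0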
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:07.862 14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:07.862 rmmod nvme_tcp
00:26:07.862 rmmod nvme_fabrics
00:26:07.862 rmmod nvme_keyring
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 97296 ']'
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 97296
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 97296 ']'
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 97296
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97296
killing process with pid 97296
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97296'
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 97296
14:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 97296
00:26:08.431 14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
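killprocess, traced twice above (pids 97658 and 97296), is autotest_common.sh's guarded shutdown: refuse an empty pid, probe liveness with kill -0, inspect the process name, then kill and wait so the exit status is reaped before teardown continues. A standalone sketch of that pattern (simplified and illustrative; the real helper also special-cases sudo-started processes):

# Sketch of the guarded kill-and-reap pattern traced above.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1              # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0 # already gone: nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                    # reap it (works because it is our child)
}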
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0
00:26:08.692
00:26:08.692 real 0m32.562s
00:26:08.692 user 2m5.506s
00:26:08.692 sys 0m3.502s
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable
14:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:08.692 ************************************
00:26:08.692 END TEST nvmf_failover
00:26:08.692 ************************************
14:30:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
14:30:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
14:30:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
14:30:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.953 ************************************
00:26:08.953 START TEST nvmf_host_discovery
00:26:08.953 ************************************
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:08.953 * Looking for test storage...
00:26:08.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[... xtrace of the multi-line LCOV_OPTS and LCOV exports omitted: the same option block (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1) is echoed four times for common/autotest_common.sh@1704 and @1705 ...]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 traces omitted: the same kilobyte-long PATH string, with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries already duplicated many times over, is assigned, re-assigned, exported and echoed ...]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
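The "[: : integer expression expected" complaint above is a real, if harmless, bug in nvmf/common.sh line 33: a single-bracket numeric test ('[' '' -eq 1 ']') is handed an unset variable, and test(1) cannot coerce the empty string to an integer, so it prints the error and the condition simply evaluates false. A defensive rewrite of that kind of check (illustrative only, not the actual common.sh fix):

# Sketch: default an empty value before -eq ever sees it.
flag=""                           # stands in for the unset variable hit at line 33
if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
fi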
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
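nvmf_veth_init, whose variable setup appears above and whose cleanup-before-create pass produces the harmless "Cannot find device" messages below, builds the virtual topology the TCP tests run on: a network namespace for the target, veth pairs whose bridge-side ends hang off a common nvmf_br bridge, and the 10.0.0.x addresses used throughout this log. A bare-bones sketch of that construction under the same names (an assumption from the teardown commands, not the helper's exact code; the real function also sets up the second initiator/target pair, which is omitted here):

# Sketch: namespace + veth + bridge topology for the NVMe/TCP tests.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge                             # bridge the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up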
00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:08.954 Cannot find device "nvmf_init_br" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:08.954 Cannot find device "nvmf_init_br2" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:08.954 Cannot find device "nvmf_tgt_br" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:08.954 Cannot find device "nvmf_tgt_br2" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:08.954 Cannot find device "nvmf_init_br" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:08.954 Cannot find device "nvmf_init_br2" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:08.954 Cannot find device "nvmf_tgt_br" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:08.954 Cannot find device "nvmf_tgt_br2" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:08.954 Cannot find device "nvmf_br" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:08.954 Cannot find device "nvmf_init_if" 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:26:08.954 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:09.215 Cannot find device "nvmf_init_if2" 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:09.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:09.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:09.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:09.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:26:09.215 00:26:09.215 --- 10.0.0.3 ping statistics --- 00:26:09.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.215 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:09.215 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:09.215 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:26:09.215 00:26:09.215 --- 10.0.0.4 ping statistics --- 00:26:09.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.215 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:09.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:09.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:09.215 00:26:09.215 --- 10.0.0.1 ping statistics --- 00:26:09.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.215 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:09.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:09.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:26:09.215 00:26:09.215 --- 10.0.0.2 ping statistics --- 00:26:09.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.215 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=98156 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 98156 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 98156 ']' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.215 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:09.216 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.216 14:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:09.477 [2024-11-06 14:30:36.876000] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
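Everything from the "Cannot find device" teardown noise to the four successful pings above is nvmf_veth_init building the test topology: the namespace nvmf_tgt_ns_spdk owns nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3 and 10.0.0.4), the host side keeps nvmf_init_if/nvmf_init_if2 (10.0.0.1 and 10.0.0.2), the four bridge-side veth peers are enslaved to nvmf_br, and iptables accepts TCP port 4420 plus bridged forwarding (the test's ipts wrapper additionally tags each rule with an SPDK_NVMF comment for later cleanup). A minimal one-pair sketch of the same wiring; the log builds two initiator and two target pairs identically:

  # One initiator pair and one target pair, bridged, with the target end
  # moved into the namespace, as in the trace above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # host to target namespace, should succeed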
00:26:09.477 [2024-11-06 14:30:36.876101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.477 [2024-11-06 14:30:37.032512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.737 [2024-11-06 14:30:37.145913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.737 [2024-11-06 14:30:37.145967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.738 [2024-11-06 14:30:37.145980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.738 [2024-11-06 14:30:37.145996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.738 [2024-11-06 14:30:37.146006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.738 [2024-11-06 14:30:37.146884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.306 [2024-11-06 14:30:37.757793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.306 [2024-11-06 14:30:37.765976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.306 null0 00:26:10.306 14:30:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.306 null1 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=98206 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 98206 /tmp/host.sock 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 98206 ']' 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:10.306 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.306 14:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:10.306 [2024-11-06 14:30:37.866738] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
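At this point two SPDK processes are running: the target, started inside the namespace with -m 0x2 and answering RPC on the default /var/tmp/spdk.sock, and a second nvmf_tgt acting as the host with -m 0x1 -r /tmp/host.sock so the bdev_nvme discovery client can be driven over its own RPC socket. The rpc_cmd calls traced above map onto plain scripts/rpc.py invocations; a sketch of the target-side sequence, assuming the stock SPDK repo layout for rpc.py:

  # Target-side equivalents of the rpc_cmd calls in the trace
  # (socket defaults to /var/tmp/spdk.sock).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  ./scripts/rpc.py bdev_null_create null0 1000 512   # 1000 MB, 512 B blocks
  ./scripts/rpc.py bdev_null_create null1 1000 512
  ./scripts/rpc.py bdev_wait_for_examine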
00:26:10.306 [2024-11-06 14:30:37.866843] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98206 ] 00:26:10.564 [2024-11-06 14:30:38.021257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.564 [2024-11-06 14:30:38.115865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.131 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 [2024-11-06 14:30:38.954224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:11.390 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:11.391 14:30:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:11.391 14:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.391 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:26:11.649 14:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:12.215 [2024-11-06 14:30:39.721778] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:12.215 [2024-11-06 14:30:39.721818] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:12.215 
[2024-11-06 14:30:39.721849] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:12.215 [2024-11-06 14:30:39.808326] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:26:12.473 [2024-11-06 14:30:39.870823] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:26:12.473 [2024-11-06 14:30:39.871812] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:26:12.473 [2024-11-06 14:30:39.873706] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:12.473 [2024-11-06 14:30:39.873734] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:12.473 [2024-11-06 14:30:39.880467] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:12.473 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
jq -r '.[].name' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:12.734 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.735 [2024-11-06 14:30:40.252529] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.735 [2024-11-06 14:30:40.260529] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
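The eval/(( max-- )) lines above are the waitforcondition helper from common/autotest_common.sh polling until the host sees the expected state: controller nvme0 attached, bdevs nvme0n1 and nvme0n2 present, paths and notification counts (via notify_get_notifications piped through jq '. | length') matching. Reconstructed from this trace as a sketch, with the failure branch assumed to simply return nonzero:

  # Poll a shell condition up to 10 times, one second apart, as the trace
  # shows (local max=10; (( max-- )); eval "$cond"; sleep 1; return 0).
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1   # assumption: the real helper may abort the test here instead
  }

  # As used in the log:
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'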
00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.735 [2024-11-06 14:30:40.326735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:12.735 [2024-11-06 14:30:40.327395] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:12.735 [2024-11-06 14:30:40.327432] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:12.735 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:12.996 [2024-11-06 14:30:40.413493] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:12.996 14:30:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:12.996 [2024-11-06 14:30:40.474960] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:26:12.996 [2024-11-06 14:30:40.475024] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:12.996 [2024-11-06 14:30:40.475036] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:12.996 [2024-11-06 14:30:40.475046] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:13.933 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.934 [2024-11-06 14:30:41.511703] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:13.934 [2024-11-06 14:30:41.511737] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:13.934 [2024-11-06 14:30:41.521158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.934 [2024-11-06 14:30:41.521192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:13.934 [2024-11-06 14:30:41.521207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.934 [2024-11-06 14:30:41.521214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.934 [2024-11-06 14:30:41.521222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.934 [2024-11-06 14:30:41.521229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.934 [2024-11-06 14:30:41.521238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.934 [2024-11-06 14:30:41.521245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.934 [2024-11-06 14:30:41.521252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:26:13.934 [2024-11-06 14:30:41.531120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.934 [2024-11-06 14:30:41.541134] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:13.934 [2024-11-06 14:30:41.541167] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:13.934 [2024-11-06 14:30:41.541173] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:13.934 [2024-11-06 14:30:41.541179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:13.934 [2024-11-06 14:30:41.541210] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:13.934 [2024-11-06 14:30:41.541275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.934 [2024-11-06 14:30:41.541290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:26:13.934 [2024-11-06 14:30:41.541298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:26:13.934 [2024-11-06 14:30:41.541312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:13.934 [2024-11-06 14:30:41.541323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:13.934 [2024-11-06 14:30:41.541331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:13.934 [2024-11-06 14:30:41.541339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:13.934 [2024-11-06 14:30:41.541346] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
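The "connect() failed, errno = 111" (ECONNREFUSED) retries repeating through this stretch are the expected fallout of the step traced at host/discovery.sh@127 just above: the target drops its 10.0.0.3:4420 listener while the host still holds a path there, so every bdev_nvme reconnect attempt to that port is refused until the discovery poller fetches the updated log page and prunes the path. The two commands involved, as they appear verbatim in the trace (the first goes to the default target socket, the second to the host app on /tmp/host.sock):

# Target side (discovery.sh@127): remove the 4420 listener; the host's
# existing path to it now fails with ECONNREFUSED on each reconnect attempt.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420
# Host side: list the surviving path service IDs; once the discovery poller
# processes the new log page, only 4421 should remain.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid'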
00:26:13.934 [2024-11-06 14:30:41.541352] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:13.934 [2024-11-06 14:30:41.541364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:13.934 [2024-11-06 14:30:41.551219] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:13.934 [2024-11-06 14:30:41.551239] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:13.934 [2024-11-06 14:30:41.551244] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:13.934 [2024-11-06 14:30:41.551249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:13.934 [2024-11-06 14:30:41.551269] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:13.934 [2024-11-06 14:30:41.551308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.934 [2024-11-06 14:30:41.551320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:26:13.934 [2024-11-06 14:30:41.551328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:26:13.934 [2024-11-06 14:30:41.551339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:13.934 [2024-11-06 14:30:41.551349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:13.934 [2024-11-06 14:30:41.551355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:13.934 [2024-11-06 14:30:41.551362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:13.934 [2024-11-06 14:30:41.551367] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:13.934 [2024-11-06 14:30:41.551372] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
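The 14:30:41 shell lines interleaved with these bdev_nvme errors are bash xtrace output from the waitforcondition helper in common/autotest_common.sh (markers @916 through @922). Read off the trace, the helper is essentially the polling loop below; the timeout return at the end is an assumption, since this log only ever shows the success path:

waitforcondition() {
    local cond=$1                   # @916: condition string to poll
    local max=10                    # @917: at most 10 attempts
    while (( max-- )); do           # @918
        eval "$cond" && return 0    # @919-@920: succeed as soon as it holds
        sleep 1                     # @922: otherwise retry a second later
    done
    return 1                        # assumed timeout path, not visible here
}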
00:26:13.934 [2024-11-06 14:30:41.551376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.934 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:13.934 [2024-11-06 14:30:41.561278] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:13.934 [2024-11-06 14:30:41.561302] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:13.934 [2024-11-06 14:30:41.561307] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:13.934 [2024-11-06 14:30:41.561311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:13.934 [2024-11-06 14:30:41.561336] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:13.934 [2024-11-06 14:30:41.561377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.934 [2024-11-06 14:30:41.561389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:26:13.934 [2024-11-06 14:30:41.561396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:26:13.934 [2024-11-06 14:30:41.561408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:13.934 [2024-11-06 14:30:41.561418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:13.934 [2024-11-06 14:30:41.561424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:13.934 [2024-11-06 14:30:41.561430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:13.934 [2024-11-06 14:30:41.561436] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:13.934 [2024-11-06 14:30:41.561441] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:13.935 [2024-11-06 14:30:41.561445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:14.193 [2024-11-06 14:30:41.571346] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:14.193 [2024-11-06 14:30:41.571368] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
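The conditions being polled are built from three one-line wrappers in host/discovery.sh, whose pipelines can be read directly off the @55/@59/@63 trace markers: each asks the host application on /tmp/host.sock for a JSON list and flattens it to a single sorted line. A sketch matching the trace (bodies reconstructed, so the in-tree versions may differ cosmetically):

get_subsystem_names() {   # discovery.sh@59: names of attached controllers
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers |
        jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # discovery.sh@55: namespaces exposed as bdevs
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs |
        jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {   # discovery.sh@63: service IDs of each path
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}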
00:26:14.193 [2024-11-06 14:30:41.571373] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:14.193 [2024-11-06 14:30:41.571378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:14.193 [2024-11-06 14:30:41.571397] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:14.193 [2024-11-06 14:30:41.571434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.193 [2024-11-06 14:30:41.571445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:26:14.193 [2024-11-06 14:30:41.571452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:26:14.193 [2024-11-06 14:30:41.571464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:14.193 [2024-11-06 14:30:41.571473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:14.193 [2024-11-06 14:30:41.571479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:14.193 [2024-11-06 14:30:41.571485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:14.193 [2024-11-06 14:30:41.571492] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:14.193 [2024-11-06 14:30:41.571496] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:14.193 [2024-11-06 14:30:41.571501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:14.193 [2024-11-06 14:30:41.581405] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:14.193 [2024-11-06 14:30:41.581428] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:14.193 [2024-11-06 14:30:41.581433] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:14.193 [2024-11-06 14:30:41.581437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:14.193 [2024-11-06 14:30:41.581455] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
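rpc_cmd itself, used by every wrapper above, is the autotest shim around SPDK's scripts/rpc.py, and the @589 "[[ 0 == 0 ]]" checks following each call are its exit-status bookkeeping. A minimal stand-in under stated assumptions ($rootdir points at the SPDK checkout; the real helper also keeps a persistent RPC daemon, which this version elides by shelling out per call):

rpc_cmd() {
    # -s /tmp/host.sock targets the host app; without it, the target app.
    "$rootdir/scripts/rpc.py" "$@"
}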
00:26:14.193 [2024-11-06 14:30:41.581496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.193 [2024-11-06 14:30:41.581508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:26:14.194 [2024-11-06 14:30:41.581515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:26:14.194 [2024-11-06 14:30:41.581526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:14.194 [2024-11-06 14:30:41.581536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:14.194 [2024-11-06 14:30:41.581542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:14.194 [2024-11-06 14:30:41.581549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:14.194 [2024-11-06 14:30:41.581554] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:14.194 [2024-11-06 14:30:41.581558] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:14.194 [2024-11-06 14:30:41.581562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:14.194 [2024-11-06 14:30:41.591465] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:14.194 [2024-11-06 14:30:41.591487] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:14.194 [2024-11-06 14:30:41.591492] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:14.194 [2024-11-06 14:30:41.591496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:14.194 [2024-11-06 14:30:41.591515] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:14.194 [2024-11-06 14:30:41.591552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.194 [2024-11-06 14:30:41.591563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:26:14.194 [2024-11-06 14:30:41.591571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:26:14.194 [2024-11-06 14:30:41.591582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:14.194 [2024-11-06 14:30:41.591592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:14.194 [2024-11-06 14:30:41.591598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:14.194 [2024-11-06 14:30:41.591604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:14.194 [2024-11-06 14:30:41.591610] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:14.194 [2024-11-06 14:30:41.591614] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:14.194 [2024-11-06 14:30:41.591618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
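The notification checks that resume below (discovery.sh@132, then @138 after the service is stopped at @134) pair an expected count with two helpers traced at @74 through @80: get_notification_count asks the host app for notifications newer than the last seen ID and advances that ID, and is_notification_count_eq polls it through waitforcondition. The trace shows notify_id stepping 2 to 4 exactly when two events arrive; the sketch below is consistent with those values, though the precise notify_id update is reconstructed rather than copied:

get_notification_count() {     # discovery.sh@74-@75
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # reconstructed update
}
is_notification_count_eq() {   # discovery.sh@79-@80
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}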
00:26:14.194 [2024-11-06 14:30:41.597780] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:26:14.194 [2024-11-06 14:30:41.597810] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:14.194 14:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.130 14:30:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:15.130 14:30:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.130 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.389 14:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.323 [2024-11-06 14:30:43.832066] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:16.323 [2024-11-06 14:30:43.832096] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:16.323 [2024-11-06 14:30:43.832122] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:16.323 [2024-11-06 14:30:43.918180] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:26:16.582 [2024-11-06 14:30:43.976526] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:26:16.582 [2024-11-06 14:30:43.977204] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:26:16.582 [2024-11-06 14:30:43.978989] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:16.582 [2024-11-06 14:30:43.979031] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:16.582 [2024-11-06 14:30:43.981097] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
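Next comes the negative test announced at discovery.sh@143 just above: with a discovery service named nvme already attached to 10.0.0.3:8009, issuing the same bdev_nvme_start_discovery again must fail, and the NOT wrapper asserts the nonzero exit. The JSON-RPC layer rejects the duplicate with Code=-17 (File exists), as the error dump below shows; the same rejection then repeats at @149 for a different name (nvme_second) on the same 8009 trid:

# Expected-failure call (verbatim from the trace); it fails because the
# "nvme" discovery context for 10.0.0.3:8009 already exists.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w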
00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.582 2024/11/06 14:30:43 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:16.582 request: 00:26:16.582 { 00:26:16.582 "method": "bdev_nvme_start_discovery", 00:26:16.582 "params": { 00:26:16.582 "name": "nvme", 00:26:16.582 "trtype": "tcp", 00:26:16.582 "traddr": "10.0.0.3", 00:26:16.582 "adrfam": "ipv4", 00:26:16.582 "trsvcid": "8009", 00:26:16.582 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:16.582 "wait_for_attach": true 00:26:16.582 } 00:26:16.582 } 00:26:16.582 Got JSON-RPC error response 00:26:16.582 GoRPCClient: error on JSON-RPC call 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:16.582 14:30:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.582 14:30:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.582 2024/11/06 14:30:44 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:16.582 request: 00:26:16.582 { 00:26:16.582 "method": "bdev_nvme_start_discovery", 00:26:16.582 "params": { 00:26:16.582 "name": "nvme_second", 00:26:16.582 "trtype": "tcp", 00:26:16.582 "traddr": "10.0.0.3", 00:26:16.582 "adrfam": "ipv4", 00:26:16.582 "trsvcid": "8009", 00:26:16.582 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:16.582 "wait_for_attach": true 00:26:16.582 } 00:26:16.582 } 00:26:16.582 Got JSON-RPC error response 00:26:16.582 GoRPCClient: error on JSON-RPC call 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
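The final negative case, issued below from discovery.sh@155, points nvme_second at port 8010, where nothing is listening, with wait_for_attach disabled and a 3000 ms attach timeout. The poller's three refused connect() attempts, one second apart (14:30:45 through 14:30:47), exhaust that timeout and the RPC fails with Code=-110 (Connection timed out):

# Expected-failure call (verbatim from the trace): -T sets attach_timeout_ms,
# so the discovery poller gives up after ~3 s of refused connections.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000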
00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.582 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.583 
14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.583 14:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.524 [2024-11-06 14:30:45.151516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.524 [2024-11-06 14:30:45.151567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:26:17.524 [2024-11-06 14:30:45.151604] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:17.524 [2024-11-06 14:30:45.151613] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:17.524 [2024-11-06 14:30:45.151621] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:26:18.895 [2024-11-06 14:30:46.151543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.895 [2024-11-06 14:30:46.151594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:26:18.895 [2024-11-06 14:30:46.151630] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:18.895 [2024-11-06 14:30:46.151639] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:18.895 [2024-11-06 14:30:46.151647] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:26:19.829 [2024-11-06 14:30:47.151380] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:26:19.830 2024/11/06 14:30:47 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:26:19.830 request: 00:26:19.830 { 00:26:19.830 "method": "bdev_nvme_start_discovery", 00:26:19.830 "params": { 00:26:19.830 "name": "nvme_second", 00:26:19.830 "trtype": "tcp", 00:26:19.830 "traddr": "10.0.0.3", 00:26:19.830 "adrfam": "ipv4", 00:26:19.830 "trsvcid": "8010", 00:26:19.830 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.830 "wait_for_attach": false, 00:26:19.830 "attach_timeout_ms": 3000 00:26:19.830 } 00:26:19.830 } 00:26:19.830 Got JSON-RPC error response 00:26:19.830 GoRPCClient: error on JSON-RPC call 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 98206 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.830 rmmod nvme_tcp 00:26:19.830 rmmod nvme_fabrics 00:26:19.830 rmmod nvme_keyring 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 98156 ']' 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 98156 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 98156 ']' 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 98156 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98156 00:26:19.830 killing process with pid 98156 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98156' 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 98156 00:26:19.830 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@976 -- # wait 98156 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:20.395 14:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:20.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:26:20.653 00:26:20.653 real 0m11.791s 00:26:20.653 user 0m22.206s 00:26:20.653 sys 0m1.485s 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:20.653 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.653 
************************************ 00:26:20.653 END TEST nvmf_host_discovery 00:26:20.654 ************************************ 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.654 ************************************ 00:26:20.654 START TEST nvmf_host_multipath_status 00:26:20.654 ************************************ 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:20.654 * Looking for test storage... 00:26:20.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.654 --rc genhtml_branch_coverage=1 00:26:20.654 --rc genhtml_function_coverage=1 00:26:20.654 --rc genhtml_legend=1 00:26:20.654 --rc geninfo_all_blocks=1 00:26:20.654 --rc geninfo_unexecuted_blocks=1 00:26:20.654 00:26:20.654 ' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.654 --rc genhtml_branch_coverage=1 00:26:20.654 --rc genhtml_function_coverage=1 00:26:20.654 --rc genhtml_legend=1 00:26:20.654 --rc geninfo_all_blocks=1 00:26:20.654 --rc geninfo_unexecuted_blocks=1 00:26:20.654 00:26:20.654 ' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.654 --rc genhtml_branch_coverage=1 00:26:20.654 --rc genhtml_function_coverage=1 00:26:20.654 --rc genhtml_legend=1 00:26:20.654 --rc geninfo_all_blocks=1 00:26:20.654 --rc geninfo_unexecuted_blocks=1 00:26:20.654 00:26:20.654 ' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.654 --rc genhtml_branch_coverage=1 00:26:20.654 --rc genhtml_function_coverage=1 00:26:20.654 --rc genhtml_legend=1 00:26:20.654 --rc geninfo_all_blocks=1 00:26:20.654 --rc geninfo_unexecuted_blocks=1 00:26:20.654 00:26:20.654 ' 00:26:20.654 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:20.654 14:30:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:20.913 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:20.913 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:20.914 Cannot find device "nvmf_init_br" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:20.914 Cannot find device "nvmf_init_br2" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:20.914 Cannot find device "nvmf_tgt_br" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.914 Cannot find device "nvmf_tgt_br2" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:20.914 Cannot find device "nvmf_init_br" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:20.914 Cannot find device "nvmf_init_br2" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:20.914 Cannot find device "nvmf_tgt_br" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:20.914 Cannot find device "nvmf_tgt_br2" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:20.914 Cannot find device "nvmf_br" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:26:20.914 Cannot find device "nvmf_init_if" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:20.914 Cannot find device "nvmf_init_if2" 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:20.914 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:21.172 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:21.172 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:21.172 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:21.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:21.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:26:21.173 00:26:21.173 --- 10.0.0.3 ping statistics --- 00:26:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.173 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:21.173 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:21.173 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:26:21.173 00:26:21.173 --- 10.0.0.4 ping statistics --- 00:26:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.173 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:21.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:26:21.173 00:26:21.173 --- 10.0.0.1 ping statistics --- 00:26:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.173 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:21.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:26:21.173 00:26:21.173 --- 10.0.0.2 ping statistics --- 00:26:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.173 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=98757 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 98757 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 98757 ']' 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
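[Annotation] The four sub-millisecond pings above confirm the topology that nvmftestinit just built: two initiator veth pairs stay on the host (10.0.0.1/2), two target pairs have their peers moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/4), all four bridge-side ends are enslaved to nvmf_br, and iptables ACCEPT rules open the NVMe/TCP port; nvmf_tgt (pid 98757) is then started inside the namespace. A condensed sketch of the same network bring-up, using exactly the names, addresses, and rule comments from this run (root and iproute2 assumed):

#!/usr/bin/env bash
# Setup sketch mirroring the nvmf_veth_init commands traced above.
ns=nvmf_tgt_ns_spdk
ip netns add "$ns"
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$ns"                          # target ends into the netns
ip link set nvmf_tgt_if2 netns "$ns"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$ns" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set nvmf_tgt_if2 up
ip netns exec "$ns" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" master nvmf_br                          # one L2 segment for all four
done
# Open the NVMe/TCP port and allow bridge-internal forwarding, tagged the way the
# ipts wrapper tags rules so the later iptables-save | grep -v SPDK_NVMF can drop them.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3    # host -> target, as verified in the statistics above
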
00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:21.173 14:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:21.173 [2024-11-06 14:30:48.664415] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:21.173 [2024-11-06 14:30:48.664523] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.431 [2024-11-06 14:30:48.823121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:21.431 [2024-11-06 14:30:48.920782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.431 [2024-11-06 14:30:48.920837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.431 [2024-11-06 14:30:48.920851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.431 [2024-11-06 14:30:48.920863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.431 [2024-11-06 14:30:48.920872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.431 [2024-11-06 14:30:48.922281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.431 [2024-11-06 14:30:48.922365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=98757 00:26:21.998 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:22.287 [2024-11-06 14:30:49.720587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.287 14:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:22.546 Malloc0 00:26:22.546 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:22.804 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:22.804 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:23.062 [2024-11-06 14:30:50.594076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:23.062 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:23.320 [2024-11-06 14:30:50.802259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:23.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=98855 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 98855 /var/tmp/bdevperf.sock 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 98855 ']' 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
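[Annotation] At this point the target side is fully configured over rpc.py (TCP transport, a 64 MiB Malloc bdev, subsystem cnode1 with listeners on 4420 and 4421), and bdevperf (pid 98855) is starting on its own RPC socket; the trace below then attaches the same subsystem twice with -x multipath to produce the Nvme0n1 paths under test. Condensed as a sketch, with a port_status helper at the end that paraphrases the jq checks repeated throughout the rest of the run (the helper name and single-path assumption are taken from this test's usage, not a general API):

#!/usr/bin/env bash
# Sketch of the target bring-up and multipath attach traced above/below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Target side (default rpc socket /var/tmp/spdk.sock):
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
# Host side, against bdevperf's socket: one bdev, two paths to the same NQN.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
  -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# Paraphrase of the port_status checks below: pull one field (current/connected/
# accessible) for the io_path on one listener port and compare it to the expectation.
# Assumes exactly one io_path per trsvcid, which holds in this test.
port_status() {  # usage: port_status 4420 current true
  local port=$1 field=$2 expected=$3 got
  got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
  [[ $got == "$expected" ]]
}

The ANA-state sweeps that follow (optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible, inaccessible/optimized) each set the listener states via nvmf_subsystem_listener_set_ana_state, sleep, and then assert the expected current/connected/accessible triple per port with exactly this kind of query.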
00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:23.320 14:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:24.253 14:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:24.253 14:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:24.253 14:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:24.512 14:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:24.769 Nvme0n1 00:26:24.769 14:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:25.027 Nvme0n1 00:26:25.028 14:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:25.028 14:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:27.560 14:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:27.560 14:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:26:27.560 14:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:27.560 14:30:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:28.494 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:28.494 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:28.494 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.494 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.753 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.753 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:28.753 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.753 14:30:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.011 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.011 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.011 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.012 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.270 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.270 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.270 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.270 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.529 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.529 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:29.529 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.529 14:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.529 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.529 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:29.529 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.529 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.787 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.787 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:29.787 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:30.045 14:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:30.304 14:30:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:31.239 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:31.239 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:31.239 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.239 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.498 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.498 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:31.498 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.498 14:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.756 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.014 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.014 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:32.014 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.014 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.272 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.272 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.272 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.272 14:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.530 14:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.530 14:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:32.530 14:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:32.789 14:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:26:32.789 14:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.164 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.422 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.422 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.422 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.422 14:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.422 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.422 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:26:34.423 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.423 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.681 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.681 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.681 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.681 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.939 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.939 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.939 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.939 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.197 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.197 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:35.197 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:35.454 14:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:35.454 14:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:36.828 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:36.828 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:36.828 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.828 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:36.828 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.828 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:36.828 14:31:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.828 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.086 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.343 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.343 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.343 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.343 14:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.604 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.604 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:37.604 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.604 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.866 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.866 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:37.866 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:38.124 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:38.124 14:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.500 14:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.500 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.500 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.500 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.500 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:39.758 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.758 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:39.758 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.758 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.016 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.016 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:40.016 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.016 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
00:26:40.274 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:40.274 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:40.274 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:40.274 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:40.274 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:26:40.274 14:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:26:40.532 14:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:26:40.789 14:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:26:41.721 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:26:41.721 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:41.721 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:41.721 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:41.978 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:41.978 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:41.978 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:41.978 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:42.263 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.263 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:42.263 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.263 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:42.263 14:31:09
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.263 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.263 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.263 14:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.520 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.520 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:42.520 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.520 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.777 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.777 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:42.777 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.777 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.035 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.035 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:43.293 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:43.293 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:26:43.551 14:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:43.551 14:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.923 14:31:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:44.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.181 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.181 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:45.181 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.181 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:45.440 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.440 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:45.440 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.440 14:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:45.698 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.698 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:45.698 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.698 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:45.956 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
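
check_status, traced at host/multipath_status.sh@68 through @73, appears to be nothing more than six port_status calls in a fixed order: current, connected, and accessible, each for listener 4420 and then 4421. The six booleans passed at @110, @114, @121 and so on map onto that order. A sketch under those assumptions:

check_status() {
    # Argument order inferred from the @68-@73 trace lines:
    # current(4420) current(4421) connected(4420) connected(4421) accessible(4420) accessible(4421)
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

The check_status true true true true true true sequence just verified is the active_active case: after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (@116) with both listeners optimized, both paths report current, so every probe expects true.
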
00:26:45.956 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:26:45.956 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:26:45.956 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:26:46.214 14:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:26:47.149 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:26:47.149 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:47.149 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:47.150 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:47.408 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:47.408 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:47.408 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:47.408 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:47.667 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:47.667 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:47.667 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:47.667 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
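
The state changes themselves are driven from the target side: set_ANA_state (@59/@60) points rpc.py at the default target socket rather than bdevperf's, and flips the ANA state of each of the two TCP listeners on nqn.2016-06.io.spdk:cnode1. The sleep 1 that follows each call presumably gives the initiator time to observe the ANA change before the next check_status. Reconstructed from the two trace lines (the function body is a sketch; the rpc.py invocations are verbatim):

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
    # (optimized | non_optimized | inaccessible, as exercised in this test)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}
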
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:47.926 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:48.184 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:48.184 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:48.184 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:48.184 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:48.442 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:48.442 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:26:48.442 14:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:26:48.701 14:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized
00:26:48.959 14:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:49.893 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:50.151 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:50.151 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420
connected true 00:26:50.151 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.151 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.409 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.409 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.409 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.409 14:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.667 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:50.926 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.926 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:50.926 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:51.184 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:51.184 14:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:52.566 14:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:52.566 14:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:52.566 14:31:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:52.566 14:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:52.566 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:52.566 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:52.566 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:52.566 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:52.824 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:53.083 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.083 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:53.083 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:53.083 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.341 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.341 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:53.341 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.341 14:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
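
The jq filters only make sense against the document shape bdev_nvme_get_io_paths returns. From the filters used throughout this run, the reply is an object with a poll_groups array whose members each carry an io_paths array, and every path records its transport (including trsvcid, the listener port) next to the current/connected/accessible booleans being tested. An abridged, hand-written illustration (any field beyond those shown in the filters is an assumption):

# Hypothetical, abridged reply:
# {
#   "poll_groups": [
#     {
#       "io_paths": [
#         {
#           "current": true,
#           "connected": true,
#           "accessible": true,
#           "transport": { "trtype": "TCP", "traddr": "10.0.0.3", "trsvcid": "4420" }
#         }
#       ]
#     }
#   ]
# }
# The filter first keeps the path whose port matches, then prints one field from it:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
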
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 98855
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 98855 ']'
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 98855
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98855
00:26:53.600 killing process with pid 98855
14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98855'
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 98855
00:26:53.600 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 98855
00:26:53.600 {
00:26:53.600   "results": [
00:26:53.600     {
00:26:53.600       "job": "Nvme0n1",
00:26:53.600       "core_mask": "0x4",
00:26:53.600       "workload": "verify",
00:26:53.600       "status": "terminated",
00:26:53.600       "verify_range": {
00:26:53.600         "start": 0,
00:26:53.600         "length": 16384
00:26:53.600       },
00:26:53.600       "queue_depth": 128,
00:26:53.600       "io_size": 4096,
00:26:53.600       "runtime": 28.327161,
00:26:53.600       "iops": 10510.407308377991,
00:26:53.600       "mibps": 41.05627854835153,
00:26:53.600       "io_failed": 0,
00:26:53.600       "io_timeout": 0,
00:26:53.600       "avg_latency_us": 12153.954993977506,
00:26:53.600       "min_latency_us": 143.36,
00:26:53.600       "max_latency_us": 3019898.88
00:26:53.600     }
00:26:53.600   ],
00:26:53.600   "core_count": 1
00:26:53.600 }
00:26:54.170 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 98855
00:26:54.170 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:54.170 [2024-11-06 14:30:50.888425] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:26:54.170 [2024-11-06 14:30:50.888549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98855 ]
00:26:54.170 [2024-11-06 14:30:51.044692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.170 [2024-11-06 14:30:51.147132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:54.170 Running I/O for 90 seconds...
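
The JSON block above is bdevperf's final report for the Nvme0n1 job, flushed when killprocess takes it down: about 10510 IOPS of 4096-byte verified I/O over a 28.33-second run with nothing failed or timed out, which agrees with the reported throughput (10510.407 x 4096 / 2^20 is approximately 41.056 MiB/s, matching "mibps"). The ~3.02 s max_latency_us plausibly corresponds to I/O held while both listeners sat in the inaccessible state. If such a report is captured to a file, the headline numbers can be pulled out with a jq one-liner (the file name here is hypothetical):

# Assuming the report was saved as bdevperf_results.json:
jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.mibps) MiB/s over \(.runtime)s, io_failed=\(.io_failed)"' \
    bdevperf_results.json
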
00:26:54.170 10929.00 IOPS, 42.69 MiB/s [2024-11-06T14:31:21.805Z] 11182.00 IOPS, 43.68 MiB/s [2024-11-06T14:31:21.805Z] 11092.00 IOPS, 43.33 MiB/s [2024-11-06T14:31:21.805Z] 11181.00 IOPS, 43.68 MiB/s [2024-11-06T14:31:21.805Z] 11200.00 IOPS, 43.75 MiB/s [2024-11-06T14:31:21.805Z] 11315.50 IOPS, 44.20 MiB/s [2024-11-06T14:31:21.805Z] 11315.00 IOPS, 44.20 MiB/s [2024-11-06T14:31:21.805Z] 11373.62 IOPS, 44.43 MiB/s [2024-11-06T14:31:21.805Z] 11417.11 IOPS, 44.60 MiB/s [2024-11-06T14:31:21.805Z] 11422.00 IOPS, 44.62 MiB/s [2024-11-06T14:31:21.805Z] 11413.27 IOPS, 44.58 MiB/s [2024-11-06T14:31:21.805Z] 11397.92 IOPS, 44.52 MiB/s [2024-11-06T14:31:21.805Z] [2024-11-06 14:31:05.517733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.170 [2024-11-06 14:31:05.517797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.517856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.517874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.517895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.517908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.517929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.517942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.517974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.517986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.170 
[2024-11-06 14:31:05.518103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:54.170 [2024-11-06 14:31:05.518352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.170 [2024-11-06 14:31:05.518364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.518625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.518638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.519352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.519372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.520962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.520976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.171 [2024-11-06 14:31:05.521013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.171 [2024-11-06 14:31:05.521051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.171 [2024-11-06 14:31:05.521088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.171 [2024-11-06 14:31:05.521124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.171 [2024-11-06 14:31:05.521160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.171 [2024-11-06 14:31:05.521197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.171 [2024-11-06 14:31:05.521233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:54.171 [2024-11-06 14:31:05.521257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.171 [2024-11-06 14:31:05.521269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 
14:31:05.521366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.172 [2024-11-06 14:31:05.521803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:54.172 [2024-11-06 14:31:05.521832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.172 [2024-11-06 14:31:05.521845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
00:26:54.172 [2024-11-06 14:31:05.521] nvme_qpair.c: *NOTICE*: burst of in-flight WRITE commands (sqid:1, nsid:1, lba 113072 through 113304, len:8 each) all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; the ~30 per-command print_command/print_completion notice pairs are collapsed into this summary line.
00:26:54.173 [2024-11-06 14:31:18.792] nvme_qpair.c: *NOTICE*: second burst: READ commands (sqid:1, lba 86280 through 86440) and WRITE commands (sqid:1, lba 86472 through 87264) all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; ~65 notice pairs collapsed into this summary line.
00:26:54.174 periodic IOPS samples bracketing the two bursts (the first fourteen were logged between them, the last two just before shutdown): 11164.54, 10367.07, 9675.93, 9223.81, 9336.00, 9395.78, 9654.47, 9868.10, 10061.81, 10097.68, 10123.35, 10186.58, 10358.40, 10475.46, 10490.89, 10508.14 IOPS; throughput dipped from 43.61 MiB/s to 36.03 MiB/s and recovered to 41.05 MiB/s.
00:26:54.174 [2024-11-06T14:31:21.809Z] Received shutdown signal, test time was about 28.327822 seconds
00:26:54.174
00:26:54.174 Latency(us)
00:26:54.174 [2024-11-06T14:31:21.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:54.174 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:54.174 Verification LBA range: start 0x0 length 0x4000
00:26:54.174 Nvme0n1 : 28.33 10510.41 41.06 0.00 0.00 12153.95 143.36 3019898.88
00:26:54.174 [2024-11-06T14:31:21.809Z] ===================================================================================================================
00:26:54.174 [2024-11-06T14:31:21.809Z] Total : 10510.41 41.06 0.00 0.00 12153.95 143.36 3019898.88
00:26:54.174 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:54.174 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:54.175 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:54.175 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:54.175 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:54.175 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
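The set +e above is what lets the module-unload loop that follows tolerate transient "module in use" failures before errexit is restored. A condensed sketch of that pattern, reconstructed from the commands visible in the trace (the sleep pacing is an assumption; the real loop in nvmf/common.sh may differ):

set +e
for i in {1..20}; do
    # modprobe -v prints the rmmod calls seen in the log
    # (rmmod nvme_tcp, rmmod nvme_fabrics, rmmod nvme_keyring)
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumption: pause between retries
done
set -e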
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:54.433 rmmod nvme_tcp
00:26:54.433 rmmod nvme_fabrics
00:26:54.433 rmmod nvme_keyring
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 98757 ']'
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 98757
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 98757 ']'
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 98757
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98757
00:26:54.433 killing process with pid 98757
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98757'
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 98757
00:26:54.433 14:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 98757
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:55.000 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:26:55.258
00:26:55.258 real 0m34.654s
00:26:55.258 user 1m51.197s
00:26:55.258 sys 0m6.325s
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:55.258 ************************************
00:26:55.258 END TEST nvmf_host_multipath_status
00:26:55.258 ************************************
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:55.258 ************************************
00:26:55.258 START TEST nvmf_discovery_remove_ifc
00:26:55.258 ************************************
00:26:55.258 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:55.521 * Looking for test storage...
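Every suite here is launched through autotest's run_test helper, which prints the banner pairs seen above and accounts the wall time reported as real/user/sys. A minimal illustration of the shape of such a wrapper (the shipped helper in autotest_common.sh also manages xtrace and failure reporting; this is only a sketch):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # run the test script with its arguments, e.g. --transport=tcp
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}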
00:26:55.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:55.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:55.521 --rc genhtml_branch_coverage=1
00:26:55.521 --rc genhtml_function_coverage=1
00:26:55.521 --rc genhtml_legend=1
00:26:55.521 --rc geninfo_all_blocks=1
00:26:55.521 --rc geninfo_unexecuted_blocks=1
00:26:55.521
00:26:55.521 '
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704/@1705 -- # LCOV_OPTS='...', export 'LCOV=lcov ...' and LCOV='lcov ...' repeat the identical flag block three more times (duplicate export traces collapsed).
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
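The block above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, so that compatible coverage flags can be chosen. The comparison splits each version string on '.', '-' and ':' and compares field by field, numerically. A simplified re-derivation of that logic (the shipped cmp_versions also handles '>', '==' and the ver1_l/ver2_l length bookkeeping seen in the trace):

lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=${#a[@]}
    (( ${#b[@]} > n )) && n=${#b[@]}
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov is older than 2.x"   # as in the trace: 1 < 2 decides in the first field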
00:26:55.521 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:55.522 14:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2-@6 -- # five near-identical records collapsed: PATH is repeatedly prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the system directories (/usr/local/bin, /usr/local/sbin, /var/spdk/dependencies/pip/bin, /usr/sbin, /usr/bin, /sbin, /bin, /var/lib/snapd/snap/bin), then exported and echoed.
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:55.522 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:26:55.522 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
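One genuine wart is preserved above: at nvmf/common.sh line 33 the expansion under test is empty, so '[' '' -eq 1 ']' makes test(1) print "integer expression expected" and return a non-zero status; the run continues only because the check merely selects a branch. The variable's name is not visible in this excerpt, so with a hypothetical OPTIONAL_FLAG the usual hardening is a numeric default:

# OPTIONAL_FLAG is a stand-in; the real variable at common.sh:33 is not shown in this log
if [ "${OPTIONAL_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--example-flag)   # illustrative action only
fi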
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:26:55.523 Cannot find device "nvmf_init_br"
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true
00:26:55.523 nvmf/common.sh@163-@174: the same tolerated pre-clean repeats for nvmf_init_br2, nvmf_tgt_br and nvmf_tgt_br2 (nomaster, then down), the nvmf_br bridge and nvmf_init_if/nvmf_init_if2 (delete), and the namespaced nvmf_tgt_if/nvmf_tgt_if2 (Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory); each failing command is followed by # true, and the repeated command/error/true triples are collapsed here.
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:26:55.523 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
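With the stale interfaces cleared (every delete above failed harmlessly and fell through to true), nvmf_veth_init rebuilds the topology: four veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace, and, continuing in the trace below, a bridge tying the host-side peers together. The whole shape, distilled from the traced commands:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path, 10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path, 10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip link add nvmf_br type bridge                               # created just below in the log
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" master nvmf_br                        # bridge every host-side peer
done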
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:55.781 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:55.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:55.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:26:55.782 00:26:55.782 --- 10.0.0.3 ping statistics --- 00:26:55.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.782 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:55.782 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:55.782 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:26:55.782 00:26:55.782 --- 10.0.0.4 ping statistics --- 00:26:55.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.782 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:55.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:55.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:26:55.782 00:26:55.782 --- 10.0.0.1 ping statistics --- 00:26:55.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.782 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:55.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:26:55.782 00:26:55.782 --- 10.0.0.2 ping statistics --- 00:26:55.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.782 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=100172 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 100172 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 100172 ']' 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
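With all four bridge ports enslaved to nvmf_br, the tagged iptables ACCEPT rules in place, and the four cross-namespace pings passing, NVMF_APP is re-prefixed with the netns command and the target comes up on core mask 0x2. A sketch of the launch-and-wait step, approximating the waitforlisten helper from autotest_common.sh with a plain RPC poll (the polling loop is my approximation, not the real helper):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Poll the RPC socket until the app answers; bail out if it died first.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early'; exit 1; }
        sleep 0.1
    done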
00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:55.782 14:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.782 [2024-11-06 14:31:23.394055] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:55.782 [2024-11-06 14:31:23.394595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.039 [2024-11-06 14:31:23.547591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.039 [2024-11-06 14:31:23.653544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.039 [2024-11-06 14:31:23.653718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.039 [2024-11-06 14:31:23.653738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.039 [2024-11-06 14:31:23.653754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.039 [2024-11-06 14:31:23.653764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.039 [2024-11-06 14:31:23.654662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.973 [2024-11-06 14:31:24.292188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.973 [2024-11-06 14:31:24.300372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:56.973 null0 00:26:56.973 [2024-11-06 14:31:24.332314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:56.973 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
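The rpc_cmd batch at discovery_remove_ifc.sh@43 is not expanded in the trace, but its effects are visible: a TCP transport, a discovery listener on 10.0.0.3:8009, a null bdev ("null0" in the output), and a data listener on 10.0.0.3:4420 for nqn.2016-06.io.spdk:cnode0 (the NQN that appears in the attach logs below). A plausible equivalent using standard rpc.py calls; the bdev size and the -a (allow-any-host) flag are my assumptions, not read from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp
    $rpc bdev_null_create null0 1000 512          # name, size (MB), block size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 8009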
00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=100222 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 100222 /tmp/host.sock 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 100222 ']' 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:56.973 14:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.973 [2024-11-06 14:31:24.427049] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:56.973 [2024-11-06 14:31:24.427350] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100222 ] 00:26:56.973 [2024-11-06 14:31:24.580063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.231 [2024-11-06 14:31:24.675499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.797 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.056 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.056 14:31:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:58.056 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.056 14:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.990 [2024-11-06 14:31:26.516371] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:58.990 [2024-11-06 14:31:26.516507] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:58.990 [2024-11-06 14:31:26.516542] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:58.990 [2024-11-06 14:31:26.602510] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:26:59.271 [2024-11-06 14:31:26.656961] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:26:59.271 [2024-11-06 14:31:26.657862] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:26:59.271 [2024-11-06 14:31:26.659321] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:59.271 [2024-11-06 14:31:26.659374] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:59.271 [2024-11-06 14:31:26.659411] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:59.271 [2024-11-06 14:31:26.659428] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:59.271 [2024-11-06 14:31:26.659454] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.271 [2024-11-06 14:31:26.664927] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
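This is the heart of the test: bdev_nvme_start_discovery connects to the discovery service on port 8009, learns about nqn.2016-06.io.spdk:cnode0 on 4420, attaches it as controller nvme0, and --wait-for-attach keeps the RPC from returning until bdev nvme0n1 exists. The same call stands alone as (flags copied from the trace):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The 2-second controller-loss timeout is what keeps the rest of the test fast: once the target interface disappears, the controller (and its bdev) is given up after only two seconds of failed reconnects.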
00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.271 14:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.203 14:31:27 
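The two helpers doing the work here (and in the one-second polls that repeat below) reduce to a jq pipeline plus a loop; reconstructed from the @29/@33/@34 trace lines:

    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
            bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expectation
    # ('' means "no bdevs left").
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }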
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.203 14:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.575 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.576 14:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.514 14:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.447 14:31:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.447 14:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:04.381 14:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.641 [2024-11-06 14:31:32.087459] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:04.641 [2024-11-06 14:31:32.087517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.641 [2024-11-06 14:31:32.087530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.641 [2024-11-06 14:31:32.087545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.641 [2024-11-06 14:31:32.087552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.641 [2024-11-06 14:31:32.087559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.641 [2024-11-06 14:31:32.087566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.641 [2024-11-06 14:31:32.087573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.641 [2024-11-06 14:31:32.087580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.641 [2024-11-06 14:31:32.087588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.641 [2024-11-06 14:31:32.087594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.641 [2024-11-06 14:31:32.087601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:27:04.641 [2024-11-06 14:31:32.097453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:27:04.641 [2024-11-06 14:31:32.107471] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:04.641 [2024-11-06 14:31:32.107496] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:04.641 [2024-11-06 14:31:32.107503] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:04.641 [2024-11-06 14:31:32.107508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:04.641 [2024-11-06 14:31:32.107543] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:05.581 14:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.582 14:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.582 14:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.582 14:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.582 14:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.582 14:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.582 14:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.582 [2024-11-06 14:31:33.127095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:05.582 [2024-11-06 14:31:33.127242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:27:05.582 [2024-11-06 14:31:33.127286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:27:05.582 [2024-11-06 14:31:33.127378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:27:05.582 [2024-11-06 14:31:33.128799] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:05.582 [2024-11-06 14:31:33.128904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:05.582 [2024-11-06 14:31:33.128930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:05.582 [2024-11-06 14:31:33.128979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:05.582 [2024-11-06 14:31:33.129007] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
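The sequence above is the first reconnect cycle: the target-side socket dies with ETIMEDOUT (errno 110), bdev_nvme tears down the qpairs and, every reconnect-delay-sec (1 s), retries connect() against 10.0.0.3:4420, which now fails outright because the address is gone (the cycle continues below). While this window is open, the controller state can be inspected from the host socket; the output shape is not shown in this trace, so dump it raw rather than assuming field names:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_get_controllers | jq .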
00:27:05.582 [2024-11-06 14:31:33.129039] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:05.582 [2024-11-06 14:31:33.129057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:05.582 [2024-11-06 14:31:33.129081] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:05.582 [2024-11-06 14:31:33.129098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:05.582 14:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.582 14:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:05.582 14:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.524 [2024-11-06 14:31:34.129202] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:06.524 [2024-11-06 14:31:34.129241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:06.524 [2024-11-06 14:31:34.129262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:06.524 [2024-11-06 14:31:34.129272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:06.524 [2024-11-06 14:31:34.129281] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:06.524 [2024-11-06 14:31:34.129290] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:06.524 [2024-11-06 14:31:34.129297] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:06.524 [2024-11-06 14:31:34.129303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:06.524 [2024-11-06 14:31:34.129345] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:27:06.524 [2024-11-06 14:31:34.129385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.524 [2024-11-06 14:31:34.129399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.524 [2024-11-06 14:31:34.129414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.524 [2024-11-06 14:31:34.129423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.524 [2024-11-06 14:31:34.129432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.524 [2024-11-06 14:31:34.129440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.524 [2024-11-06 14:31:34.129449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.524 [2024-11-06 14:31:34.129458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.524 [2024-11-06 14:31:34.129467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.524 [2024-11-06 14:31:34.129475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.524 [2024-11-06 14:31:34.129484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
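Here the 2-second controller-loss timeout expires: the controller is failed for good, the discovery poller drops its entry for nqn.2016-06.io.spdk:cnode0, and nvme0n1 is deleted, so the poll below finally sees an empty list. Using the get_bdev_list sketch from above, the success condition is simply:

    # What wait_for_bdev '' asserts once per second:
    [[ -z "$(get_bdev_list)" ]] && echo 'nvme0n1 gone: interface removal detected'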
00:27:06.524 [2024-11-06 14:31:34.130127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:06.524 [2024-11-06 14:31:34.131148] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:06.524 [2024-11-06 14:31:34.131172] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:06.524 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.783 14:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.718 14:31:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:07.718 14:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.652 [2024-11-06 14:31:36.137580] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:08.652 [2024-11-06 14:31:36.137608] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:08.652 [2024-11-06 14:31:36.137629] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:08.652 [2024-11-06 14:31:36.223698] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:27:08.652 [2024-11-06 14:31:36.278048] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:27:08.652 [2024-11-06 14:31:36.278745] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:27:08.652 [2024-11-06 14:31:36.280221] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:08.652 [2024-11-06 14:31:36.280271] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:08.652 [2024-11-06 14:31:36.280309] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:08.652 [2024-11-06 14:31:36.280326] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:27:08.652 [2024-11-06 14:31:36.280335] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:27:08.652 [2024-11-06 14:31:36.286338] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
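Recovery is the mirror image: the @82/@83 commands put 10.0.0.3/24 back on nvmf_tgt_if and bring it up, the still-running discovery poller reattaches, and a second controller is created, so the bdev reappears under the next free name, nvme1n1, rather than nvme0n1. The restore step in isolation:

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up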
00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 100222 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 100222 ']' 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 100222 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100222 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:08.910 killing process with pid 100222 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100222' 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 100222 00:27:08.910 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 100222 00:27:09.491 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:09.491 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.491 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:09.491 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.491 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:09.491 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.491 14:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.491 rmmod nvme_tcp 00:27:09.491 rmmod nvme_fabrics 00:27:09.491 rmmod nvme_keyring 00:27:09.491 14:31:37 
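killprocess, as traced above for pid 100222 (and again for 100172 below), validates the pid, refuses to signal a sudo wrapper, then kills and reaps. A condensed reconstruction, with error handling simplified:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1           # still alive?
        if [[ $(uname) == Linux ]]; then
            # Never kill a sudo wrapper by mistake.
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }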
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 100172 ']' 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 100172 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 100172 ']' 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 100172 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100172 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100172' 00:27:09.491 killing process with pid 100172 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 100172 00:27:09.491 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 100172 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:10.082 14:31:37 
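The iptr step above undoes the firewall changes without touching unrelated rules (link teardown continues below): every rule the test added was tagged by the ipts wrapper with an 'SPDK_NVMF:' comment, so cleanup is a single save/filter/restore round-trip. Both halves, reconstructed from the @790/@791 traces:

    # Setup side: tag each rule with its own argument string.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Teardown side: drop every tagged rule in one shot.
    iptables-save | grep -v SPDK_NVMF | iptables-restore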
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:10.082 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:27:10.343 00:27:10.343 real 0m14.976s 00:27:10.343 user 0m26.118s 00:27:10.343 sys 0m1.430s 00:27:10.343 ************************************ 00:27:10.343 END TEST nvmf_discovery_remove_ifc 00:27:10.343 ************************************ 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.343 ************************************ 00:27:10.343 START TEST nvmf_identify_kernel_target 00:27:10.343 ************************************ 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.343 * Looking for test storage... 
00:27:10.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:10.343 14:31:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:10.605 14:31:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.605 --rc genhtml_branch_coverage=1 00:27:10.605 --rc genhtml_function_coverage=1 00:27:10.605 --rc genhtml_legend=1 00:27:10.605 --rc geninfo_all_blocks=1 00:27:10.605 --rc geninfo_unexecuted_blocks=1 00:27:10.605 00:27:10.605 ' 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.605 --rc genhtml_branch_coverage=1 00:27:10.605 --rc genhtml_function_coverage=1 00:27:10.605 --rc genhtml_legend=1 00:27:10.605 --rc geninfo_all_blocks=1 00:27:10.605 --rc geninfo_unexecuted_blocks=1 00:27:10.605 00:27:10.605 ' 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.605 --rc genhtml_branch_coverage=1 00:27:10.605 --rc genhtml_function_coverage=1 00:27:10.605 --rc genhtml_legend=1 00:27:10.605 --rc geninfo_all_blocks=1 00:27:10.605 --rc geninfo_unexecuted_blocks=1 00:27:10.605 00:27:10.605 ' 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.605 --rc genhtml_branch_coverage=1 00:27:10.605 --rc genhtml_function_coverage=1 00:27:10.605 --rc genhtml_legend=1 00:27:10.605 --rc geninfo_all_blocks=1 00:27:10.605 --rc geninfo_unexecuted_blocks=1 00:27:10.605 00:27:10.605 ' 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
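The version check traced above (cmp_versions in scripts/common.sh) decides whether the installed lcov predates 2.x and therefore needs the legacy --rc options. The algorithm splits both version strings on '.', '-' and ':' and compares component-wise, treating missing components as 0. A condensed, self-contained sketch of the less-than case:

    # lt A B: succeed if version A sorts strictly before version B.
    lt() {
        local IFS=.-: v d1 d2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && return 1
            ((d1 < d2)) && return 0
        done
        return 1   # equal is not less-than
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo 'lcov < 2: enable the legacy branch/function coverage --rc flags'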
00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:10.605 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.606 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:10.606 14:31:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:10.606 14:31:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:10.606 Cannot find device "nvmf_init_br" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:10.606 Cannot find device "nvmf_init_br2" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:10.606 Cannot find device "nvmf_tgt_br" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:10.606 Cannot find device "nvmf_tgt_br2" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:10.606 Cannot find device "nvmf_init_br" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:10.606 Cannot find device "nvmf_init_br2" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:10.606 Cannot find device "nvmf_tgt_br" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:10.606 Cannot find device "nvmf_tgt_br2" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:10.606 Cannot find device "nvmf_br" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:10.606 Cannot find device "nvmf_init_if" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:10.606 Cannot find device "nvmf_init_if2" 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:10.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:10.606 14:31:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:10.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:10.606 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:10.607 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:10.607 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:10.868 14:31:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:10.868 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:10.868 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:27:10.868 00:27:10.868 --- 10.0.0.3 ping statistics --- 00:27:10.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.868 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:10.868 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:10.868 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:27:10.868 00:27:10.868 --- 10.0.0.4 ping statistics --- 00:27:10.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.868 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:10.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:27:10.868 00:27:10.868 --- 10.0.0.1 ping statistics --- 00:27:10.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.868 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:27:10.868 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:10.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:10.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:27:10.868 00:27:10.868 --- 10.0.0.2 ping statistics --- 00:27:10.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.868 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:10.869 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:11.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.130 Waiting for block devices as requested 00:27:11.130 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:11.392 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:11.392 No valid GPT data, bailing 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:27:11.392 14:31:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:27:11.392 14:31:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:11.392 No valid GPT data, bailing 00:27:11.392 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:11.654 No valid GPT data, bailing 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:11.654 No valid GPT data, bailing 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -a 10.0.0.1 -t tcp -s 4420 00:27:11.654 00:27:11.654 Discovery Log Number of Records 2, Generation counter 2 00:27:11.654 =====Discovery Log Entry 0====== 00:27:11.654 trtype: tcp 00:27:11.654 adrfam: ipv4 00:27:11.654 subtype: current discovery subsystem 00:27:11.654 treq: not specified, sq flow control disable supported 00:27:11.654 portid: 1 00:27:11.654 trsvcid: 4420 00:27:11.654 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:11.654 traddr: 10.0.0.1 00:27:11.654 eflags: none 00:27:11.654 sectype: none 00:27:11.654 =====Discovery Log Entry 1====== 00:27:11.654 trtype: tcp 00:27:11.654 adrfam: ipv4 00:27:11.654 subtype: nvme subsystem 00:27:11.654 treq: not 
specified, sq flow control disable supported 00:27:11.654 portid: 1 00:27:11.654 trsvcid: 4420 00:27:11.654 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:11.654 traddr: 10.0.0.1 00:27:11.654 eflags: none 00:27:11.654 sectype: none 00:27:11.654 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:11.654 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:11.915 ===================================================== 00:27:11.915 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:11.915 ===================================================== 00:27:11.915 Controller Capabilities/Features 00:27:11.915 ================================ 00:27:11.915 Vendor ID: 0000 00:27:11.915 Subsystem Vendor ID: 0000 00:27:11.915 Serial Number: 0fb1873dd77bcd737a0b 00:27:11.915 Model Number: Linux 00:27:11.915 Firmware Version: 6.8.9-20 00:27:11.915 Recommended Arb Burst: 0 00:27:11.915 IEEE OUI Identifier: 00 00 00 00:27:11.915 Multi-path I/O 00:27:11.915 May have multiple subsystem ports: No 00:27:11.915 May have multiple controllers: No 00:27:11.915 Associated with SR-IOV VF: No 00:27:11.915 Max Data Transfer Size: Unlimited 00:27:11.915 Max Number of Namespaces: 0 00:27:11.915 Max Number of I/O Queues: 1024 00:27:11.915 NVMe Specification Version (VS): 1.3 00:27:11.915 NVMe Specification Version (Identify): 1.3 00:27:11.915 Maximum Queue Entries: 1024 00:27:11.915 Contiguous Queues Required: No 00:27:11.915 Arbitration Mechanisms Supported 00:27:11.915 Weighted Round Robin: Not Supported 00:27:11.915 Vendor Specific: Not Supported 00:27:11.915 Reset Timeout: 7500 ms 00:27:11.915 Doorbell Stride: 4 bytes 00:27:11.915 NVM Subsystem Reset: Not Supported 00:27:11.915 Command Sets Supported 00:27:11.915 NVM Command Set: Supported 00:27:11.915 Boot Partition: Not Supported 00:27:11.915 Memory Page Size Minimum: 4096 bytes 00:27:11.915 Memory Page Size Maximum: 4096 bytes 00:27:11.915 Persistent Memory Region: Not Supported 00:27:11.915 Optional Asynchronous Events Supported 00:27:11.915 Namespace Attribute Notices: Not Supported 00:27:11.915 Firmware Activation Notices: Not Supported 00:27:11.915 ANA Change Notices: Not Supported 00:27:11.915 PLE Aggregate Log Change Notices: Not Supported 00:27:11.915 LBA Status Info Alert Notices: Not Supported 00:27:11.915 EGE Aggregate Log Change Notices: Not Supported 00:27:11.915 Normal NVM Subsystem Shutdown event: Not Supported 00:27:11.915 Zone Descriptor Change Notices: Not Supported 00:27:11.915 Discovery Log Change Notices: Supported 00:27:11.915 Controller Attributes 00:27:11.915 128-bit Host Identifier: Not Supported 00:27:11.915 Non-Operational Permissive Mode: Not Supported 00:27:11.915 NVM Sets: Not Supported 00:27:11.915 Read Recovery Levels: Not Supported 00:27:11.915 Endurance Groups: Not Supported 00:27:11.915 Predictable Latency Mode: Not Supported 00:27:11.915 Traffic Based Keep ALive: Not Supported 00:27:11.915 Namespace Granularity: Not Supported 00:27:11.915 SQ Associations: Not Supported 00:27:11.915 UUID List: Not Supported 00:27:11.915 Multi-Domain Subsystem: Not Supported 00:27:11.916 Fixed Capacity Management: Not Supported 00:27:11.916 Variable Capacity Management: Not Supported 00:27:11.916 Delete Endurance Group: Not Supported 00:27:11.916 Delete NVM Set: Not Supported 00:27:11.916 Extended LBA Formats Supported: Not Supported 00:27:11.916 Flexible Data 
Placement Supported: Not Supported 00:27:11.916 00:27:11.916 Controller Memory Buffer Support 00:27:11.916 ================================ 00:27:11.916 Supported: No 00:27:11.916 00:27:11.916 Persistent Memory Region Support 00:27:11.916 ================================ 00:27:11.916 Supported: No 00:27:11.916 00:27:11.916 Admin Command Set Attributes 00:27:11.916 ============================ 00:27:11.916 Security Send/Receive: Not Supported 00:27:11.916 Format NVM: Not Supported 00:27:11.916 Firmware Activate/Download: Not Supported 00:27:11.916 Namespace Management: Not Supported 00:27:11.916 Device Self-Test: Not Supported 00:27:11.916 Directives: Not Supported 00:27:11.916 NVMe-MI: Not Supported 00:27:11.916 Virtualization Management: Not Supported 00:27:11.916 Doorbell Buffer Config: Not Supported 00:27:11.916 Get LBA Status Capability: Not Supported 00:27:11.916 Command & Feature Lockdown Capability: Not Supported 00:27:11.916 Abort Command Limit: 1 00:27:11.916 Async Event Request Limit: 1 00:27:11.916 Number of Firmware Slots: N/A 00:27:11.916 Firmware Slot 1 Read-Only: N/A 00:27:11.916 Firmware Activation Without Reset: N/A 00:27:11.916 Multiple Update Detection Support: N/A 00:27:11.916 Firmware Update Granularity: No Information Provided 00:27:11.916 Per-Namespace SMART Log: No 00:27:11.916 Asymmetric Namespace Access Log Page: Not Supported 00:27:11.916 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:11.916 Command Effects Log Page: Not Supported 00:27:11.916 Get Log Page Extended Data: Supported 00:27:11.916 Telemetry Log Pages: Not Supported 00:27:11.916 Persistent Event Log Pages: Not Supported 00:27:11.916 Supported Log Pages Log Page: May Support 00:27:11.916 Commands Supported & Effects Log Page: Not Supported 00:27:11.916 Feature Identifiers & Effects Log Page:May Support 00:27:11.916 NVMe-MI Commands & Effects Log Page: May Support 00:27:11.916 Data Area 4 for Telemetry Log: Not Supported 00:27:11.916 Error Log Page Entries Supported: 1 00:27:11.916 Keep Alive: Not Supported 00:27:11.916 00:27:11.916 NVM Command Set Attributes 00:27:11.916 ========================== 00:27:11.916 Submission Queue Entry Size 00:27:11.916 Max: 1 00:27:11.916 Min: 1 00:27:11.916 Completion Queue Entry Size 00:27:11.916 Max: 1 00:27:11.916 Min: 1 00:27:11.916 Number of Namespaces: 0 00:27:11.916 Compare Command: Not Supported 00:27:11.916 Write Uncorrectable Command: Not Supported 00:27:11.916 Dataset Management Command: Not Supported 00:27:11.916 Write Zeroes Command: Not Supported 00:27:11.916 Set Features Save Field: Not Supported 00:27:11.916 Reservations: Not Supported 00:27:11.916 Timestamp: Not Supported 00:27:11.916 Copy: Not Supported 00:27:11.916 Volatile Write Cache: Not Present 00:27:11.916 Atomic Write Unit (Normal): 1 00:27:11.916 Atomic Write Unit (PFail): 1 00:27:11.916 Atomic Compare & Write Unit: 1 00:27:11.916 Fused Compare & Write: Not Supported 00:27:11.916 Scatter-Gather List 00:27:11.916 SGL Command Set: Supported 00:27:11.916 SGL Keyed: Not Supported 00:27:11.916 SGL Bit Bucket Descriptor: Not Supported 00:27:11.916 SGL Metadata Pointer: Not Supported 00:27:11.916 Oversized SGL: Not Supported 00:27:11.916 SGL Metadata Address: Not Supported 00:27:11.916 SGL Offset: Supported 00:27:11.916 Transport SGL Data Block: Not Supported 00:27:11.916 Replay Protected Memory Block: Not Supported 00:27:11.916 00:27:11.916 Firmware Slot Information 00:27:11.916 ========================= 00:27:11.916 Active slot: 0 00:27:11.916 00:27:11.916 00:27:11.916 Error Log 
00:27:11.916 ========= 00:27:11.916 00:27:11.916 Active Namespaces 00:27:11.916 ================= 00:27:11.916 Discovery Log Page 00:27:11.916 ================== 00:27:11.916 Generation Counter: 2 00:27:11.916 Number of Records: 2 00:27:11.916 Record Format: 0 00:27:11.916 00:27:11.916 Discovery Log Entry 0 00:27:11.916 ---------------------- 00:27:11.916 Transport Type: 3 (TCP) 00:27:11.916 Address Family: 1 (IPv4) 00:27:11.916 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:11.916 Entry Flags: 00:27:11.916 Duplicate Returned Information: 0 00:27:11.916 Explicit Persistent Connection Support for Discovery: 0 00:27:11.916 Transport Requirements: 00:27:11.916 Secure Channel: Not Specified 00:27:11.916 Port ID: 1 (0x0001) 00:27:11.916 Controller ID: 65535 (0xffff) 00:27:11.916 Admin Max SQ Size: 32 00:27:11.916 Transport Service Identifier: 4420 00:27:11.916 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:11.916 Transport Address: 10.0.0.1 00:27:11.916 Discovery Log Entry 1 00:27:11.916 ---------------------- 00:27:11.916 Transport Type: 3 (TCP) 00:27:11.916 Address Family: 1 (IPv4) 00:27:11.916 Subsystem Type: 2 (NVM Subsystem) 00:27:11.916 Entry Flags: 00:27:11.916 Duplicate Returned Information: 0 00:27:11.916 Explicit Persistent Connection Support for Discovery: 0 00:27:11.916 Transport Requirements: 00:27:11.916 Secure Channel: Not Specified 00:27:11.916 Port ID: 1 (0x0001) 00:27:11.916 Controller ID: 65535 (0xffff) 00:27:11.916 Admin Max SQ Size: 32 00:27:11.916 Transport Service Identifier: 4420 00:27:11.916 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:11.916 Transport Address: 10.0.0.1 00:27:11.916 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.177 get_feature(0x01) failed 00:27:12.177 get_feature(0x02) failed 00:27:12.177 get_feature(0x04) failed 00:27:12.177 ===================================================== 00:27:12.177 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:12.178 ===================================================== 00:27:12.178 Controller Capabilities/Features 00:27:12.178 ================================ 00:27:12.178 Vendor ID: 0000 00:27:12.178 Subsystem Vendor ID: 0000 00:27:12.178 Serial Number: 0d98d7bc303db41d22b3 00:27:12.178 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:12.178 Firmware Version: 6.8.9-20 00:27:12.178 Recommended Arb Burst: 6 00:27:12.178 IEEE OUI Identifier: 00 00 00 00:27:12.178 Multi-path I/O 00:27:12.178 May have multiple subsystem ports: Yes 00:27:12.178 May have multiple controllers: Yes 00:27:12.178 Associated with SR-IOV VF: No 00:27:12.178 Max Data Transfer Size: Unlimited 00:27:12.178 Max Number of Namespaces: 1024 00:27:12.178 Max Number of I/O Queues: 128 00:27:12.178 NVMe Specification Version (VS): 1.3 00:27:12.178 NVMe Specification Version (Identify): 1.3 00:27:12.178 Maximum Queue Entries: 1024 00:27:12.178 Contiguous Queues Required: No 00:27:12.178 Arbitration Mechanisms Supported 00:27:12.178 Weighted Round Robin: Not Supported 00:27:12.178 Vendor Specific: Not Supported 00:27:12.178 Reset Timeout: 7500 ms 00:27:12.178 Doorbell Stride: 4 bytes 00:27:12.178 NVM Subsystem Reset: Not Supported 00:27:12.178 Command Sets Supported 00:27:12.178 NVM Command Set: Supported 00:27:12.178 Boot Partition: Not Supported 00:27:12.178 Memory 
Page Size Minimum: 4096 bytes 00:27:12.178 Memory Page Size Maximum: 4096 bytes 00:27:12.178 Persistent Memory Region: Not Supported 00:27:12.178 Optional Asynchronous Events Supported 00:27:12.178 Namespace Attribute Notices: Supported 00:27:12.178 Firmware Activation Notices: Not Supported 00:27:12.178 ANA Change Notices: Supported 00:27:12.178 PLE Aggregate Log Change Notices: Not Supported 00:27:12.178 LBA Status Info Alert Notices: Not Supported 00:27:12.178 EGE Aggregate Log Change Notices: Not Supported 00:27:12.178 Normal NVM Subsystem Shutdown event: Not Supported 00:27:12.178 Zone Descriptor Change Notices: Not Supported 00:27:12.178 Discovery Log Change Notices: Not Supported 00:27:12.178 Controller Attributes 00:27:12.178 128-bit Host Identifier: Supported 00:27:12.178 Non-Operational Permissive Mode: Not Supported 00:27:12.178 NVM Sets: Not Supported 00:27:12.178 Read Recovery Levels: Not Supported 00:27:12.178 Endurance Groups: Not Supported 00:27:12.178 Predictable Latency Mode: Not Supported 00:27:12.178 Traffic Based Keep ALive: Supported 00:27:12.178 Namespace Granularity: Not Supported 00:27:12.178 SQ Associations: Not Supported 00:27:12.178 UUID List: Not Supported 00:27:12.178 Multi-Domain Subsystem: Not Supported 00:27:12.178 Fixed Capacity Management: Not Supported 00:27:12.178 Variable Capacity Management: Not Supported 00:27:12.178 Delete Endurance Group: Not Supported 00:27:12.178 Delete NVM Set: Not Supported 00:27:12.178 Extended LBA Formats Supported: Not Supported 00:27:12.178 Flexible Data Placement Supported: Not Supported 00:27:12.178 00:27:12.178 Controller Memory Buffer Support 00:27:12.178 ================================ 00:27:12.178 Supported: No 00:27:12.178 00:27:12.178 Persistent Memory Region Support 00:27:12.178 ================================ 00:27:12.178 Supported: No 00:27:12.178 00:27:12.178 Admin Command Set Attributes 00:27:12.178 ============================ 00:27:12.178 Security Send/Receive: Not Supported 00:27:12.178 Format NVM: Not Supported 00:27:12.178 Firmware Activate/Download: Not Supported 00:27:12.178 Namespace Management: Not Supported 00:27:12.178 Device Self-Test: Not Supported 00:27:12.178 Directives: Not Supported 00:27:12.178 NVMe-MI: Not Supported 00:27:12.178 Virtualization Management: Not Supported 00:27:12.178 Doorbell Buffer Config: Not Supported 00:27:12.178 Get LBA Status Capability: Not Supported 00:27:12.178 Command & Feature Lockdown Capability: Not Supported 00:27:12.178 Abort Command Limit: 4 00:27:12.178 Async Event Request Limit: 4 00:27:12.178 Number of Firmware Slots: N/A 00:27:12.178 Firmware Slot 1 Read-Only: N/A 00:27:12.178 Firmware Activation Without Reset: N/A 00:27:12.178 Multiple Update Detection Support: N/A 00:27:12.178 Firmware Update Granularity: No Information Provided 00:27:12.178 Per-Namespace SMART Log: Yes 00:27:12.178 Asymmetric Namespace Access Log Page: Supported 00:27:12.178 ANA Transition Time : 10 sec 00:27:12.178 00:27:12.178 Asymmetric Namespace Access Capabilities 00:27:12.178 ANA Optimized State : Supported 00:27:12.178 ANA Non-Optimized State : Supported 00:27:12.178 ANA Inaccessible State : Supported 00:27:12.178 ANA Persistent Loss State : Supported 00:27:12.178 ANA Change State : Supported 00:27:12.178 ANAGRPID is not changed : No 00:27:12.178 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:12.178 00:27:12.178 ANA Group Identifier Maximum : 128 00:27:12.178 Number of ANA Group Identifiers : 128 00:27:12.178 Max Number of Allowed Namespaces : 1024 00:27:12.178 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:27:12.178 Command Effects Log Page: Supported 00:27:12.178 Get Log Page Extended Data: Supported 00:27:12.178 Telemetry Log Pages: Not Supported 00:27:12.178 Persistent Event Log Pages: Not Supported 00:27:12.178 Supported Log Pages Log Page: May Support 00:27:12.178 Commands Supported & Effects Log Page: Not Supported 00:27:12.178 Feature Identifiers & Effects Log Page:May Support 00:27:12.178 NVMe-MI Commands & Effects Log Page: May Support 00:27:12.178 Data Area 4 for Telemetry Log: Not Supported 00:27:12.178 Error Log Page Entries Supported: 128 00:27:12.178 Keep Alive: Supported 00:27:12.178 Keep Alive Granularity: 1000 ms 00:27:12.178 00:27:12.178 NVM Command Set Attributes 00:27:12.178 ========================== 00:27:12.178 Submission Queue Entry Size 00:27:12.178 Max: 64 00:27:12.178 Min: 64 00:27:12.178 Completion Queue Entry Size 00:27:12.178 Max: 16 00:27:12.178 Min: 16 00:27:12.178 Number of Namespaces: 1024 00:27:12.178 Compare Command: Not Supported 00:27:12.178 Write Uncorrectable Command: Not Supported 00:27:12.178 Dataset Management Command: Supported 00:27:12.178 Write Zeroes Command: Supported 00:27:12.178 Set Features Save Field: Not Supported 00:27:12.178 Reservations: Not Supported 00:27:12.178 Timestamp: Not Supported 00:27:12.178 Copy: Not Supported 00:27:12.178 Volatile Write Cache: Present 00:27:12.179 Atomic Write Unit (Normal): 1 00:27:12.179 Atomic Write Unit (PFail): 1 00:27:12.179 Atomic Compare & Write Unit: 1 00:27:12.179 Fused Compare & Write: Not Supported 00:27:12.179 Scatter-Gather List 00:27:12.179 SGL Command Set: Supported 00:27:12.179 SGL Keyed: Not Supported 00:27:12.179 SGL Bit Bucket Descriptor: Not Supported 00:27:12.179 SGL Metadata Pointer: Not Supported 00:27:12.179 Oversized SGL: Not Supported 00:27:12.179 SGL Metadata Address: Not Supported 00:27:12.179 SGL Offset: Supported 00:27:12.179 Transport SGL Data Block: Not Supported 00:27:12.179 Replay Protected Memory Block: Not Supported 00:27:12.179 00:27:12.179 Firmware Slot Information 00:27:12.179 ========================= 00:27:12.179 Active slot: 0 00:27:12.179 00:27:12.179 Asymmetric Namespace Access 00:27:12.179 =========================== 00:27:12.179 Change Count : 0 00:27:12.179 Number of ANA Group Descriptors : 1 00:27:12.179 ANA Group Descriptor : 0 00:27:12.179 ANA Group ID : 1 00:27:12.179 Number of NSID Values : 1 00:27:12.179 Change Count : 0 00:27:12.179 ANA State : 1 00:27:12.179 Namespace Identifier : 1 00:27:12.179 00:27:12.179 Commands Supported and Effects 00:27:12.179 ============================== 00:27:12.179 Admin Commands 00:27:12.179 -------------- 00:27:12.179 Get Log Page (02h): Supported 00:27:12.179 Identify (06h): Supported 00:27:12.179 Abort (08h): Supported 00:27:12.179 Set Features (09h): Supported 00:27:12.179 Get Features (0Ah): Supported 00:27:12.179 Asynchronous Event Request (0Ch): Supported 00:27:12.179 Keep Alive (18h): Supported 00:27:12.179 I/O Commands 00:27:12.179 ------------ 00:27:12.179 Flush (00h): Supported 00:27:12.179 Write (01h): Supported LBA-Change 00:27:12.179 Read (02h): Supported 00:27:12.179 Write Zeroes (08h): Supported LBA-Change 00:27:12.179 Dataset Management (09h): Supported 00:27:12.179 00:27:12.179 Error Log 00:27:12.179 ========= 00:27:12.179 Entry: 0 00:27:12.179 Error Count: 0x3 00:27:12.179 Submission Queue Id: 0x0 00:27:12.179 Command Id: 0x5 00:27:12.179 Phase Bit: 0 00:27:12.179 Status Code: 0x2 00:27:12.179 Status Code Type: 0x0 00:27:12.179 Do Not Retry: 1 00:27:12.179 Error 
Location: 0x28 00:27:12.179 LBA: 0x0 00:27:12.179 Namespace: 0x0 00:27:12.179 Vendor Log Page: 0x0 00:27:12.179 ----------- 00:27:12.179 Entry: 1 00:27:12.179 Error Count: 0x2 00:27:12.179 Submission Queue Id: 0x0 00:27:12.179 Command Id: 0x5 00:27:12.179 Phase Bit: 0 00:27:12.179 Status Code: 0x2 00:27:12.179 Status Code Type: 0x0 00:27:12.179 Do Not Retry: 1 00:27:12.179 Error Location: 0x28 00:27:12.179 LBA: 0x0 00:27:12.179 Namespace: 0x0 00:27:12.179 Vendor Log Page: 0x0 00:27:12.179 ----------- 00:27:12.179 Entry: 2 00:27:12.179 Error Count: 0x1 00:27:12.179 Submission Queue Id: 0x0 00:27:12.179 Command Id: 0x4 00:27:12.179 Phase Bit: 0 00:27:12.179 Status Code: 0x2 00:27:12.179 Status Code Type: 0x0 00:27:12.179 Do Not Retry: 1 00:27:12.179 Error Location: 0x28 00:27:12.179 LBA: 0x0 00:27:12.179 Namespace: 0x0 00:27:12.179 Vendor Log Page: 0x0 00:27:12.179 00:27:12.179 Number of Queues 00:27:12.179 ================ 00:27:12.179 Number of I/O Submission Queues: 128 00:27:12.179 Number of I/O Completion Queues: 128 00:27:12.179 00:27:12.179 ZNS Specific Controller Data 00:27:12.179 ============================ 00:27:12.179 Zone Append Size Limit: 0 00:27:12.179 00:27:12.179 00:27:12.179 Active Namespaces 00:27:12.179 ================= 00:27:12.179 get_feature(0x05) failed 00:27:12.179 Namespace ID:1 00:27:12.179 Command Set Identifier: NVM (00h) 00:27:12.179 Deallocate: Supported 00:27:12.179 Deallocated/Unwritten Error: Not Supported 00:27:12.179 Deallocated Read Value: Unknown 00:27:12.179 Deallocate in Write Zeroes: Not Supported 00:27:12.179 Deallocated Guard Field: 0xFFFF 00:27:12.179 Flush: Supported 00:27:12.179 Reservation: Not Supported 00:27:12.179 Namespace Sharing Capabilities: Multiple Controllers 00:27:12.179 Size (in LBAs): 1310720 (5GiB) 00:27:12.179 Capacity (in LBAs): 1310720 (5GiB) 00:27:12.179 Utilization (in LBAs): 1310720 (5GiB) 00:27:12.179 UUID: 9c2f7021-54ef-4998-bbc6-8b8c34a38540 00:27:12.179 Thin Provisioning: Not Supported 00:27:12.179 Per-NS Atomic Units: Yes 00:27:12.179 Atomic Boundary Size (Normal): 0 00:27:12.179 Atomic Boundary Size (PFail): 0 00:27:12.179 Atomic Boundary Offset: 0 00:27:12.179 NGUID/EUI64 Never Reused: No 00:27:12.179 ANA group ID: 1 00:27:12.179 Namespace Write Protected: No 00:27:12.179 Number of LBA Formats: 1 00:27:12.179 Current LBA Format: LBA Format #00 00:27:12.179 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:27:12.179 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:12.179 rmmod nvme_tcp 00:27:12.179 rmmod nvme_fabrics 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:12.179 14:31:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:12.179 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:12.180 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:12.180 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:12.180 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:12.180 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:12.180 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:12.180 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:12.440 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:12.441 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:12.441 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:12.441 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:12.441 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:12.441 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.441 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.441 14:31:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:12.441 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:12.701 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:13.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:13.273 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:13.273 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:13.273 00:27:13.273 real 0m2.950s 00:27:13.273 user 0m0.945s 00:27:13.273 sys 0m1.195s 00:27:13.273 ************************************ 00:27:13.273 END TEST nvmf_identify_kernel_target 00:27:13.273 ************************************ 00:27:13.273 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:13.273 14:31:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:13.273 14:31:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:13.273 14:31:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:13.273 14:31:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:13.273 14:31:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.273 ************************************ 00:27:13.273 START TEST nvmf_auth_host 00:27:13.273 ************************************ 00:27:13.273 14:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:13.535 * Looking for test storage... 
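With nvmf_identify_kernel_target finished, the trace above has already torn the kernel target back down: clean_kernel_target unlinks the subsystem from the port, removes the configfs directories children-first, and unloads nvmet_tcp/nvmet. A condensed sketch of the full configfs round trip the test drove, assuming the standard Linux nvmet layout (the destinations of the bare `echo` lines are inferred from that layout, not shown by the log itself):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet nvmet_tcp                          # tcp transport needed to enable the port
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host" # assumed target of one bare `echo 1`
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"      # expose the subsystem on the port
    # teardown mirrors setup in reverse: unlink first, then children before parents
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet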
00:27:13.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:13.535 14:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:13.535 14:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:13.535 14:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.535 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:13.536 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:13.536 Cannot find device "nvmf_init_br" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:13.536 Cannot find device "nvmf_init_br2" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:13.536 Cannot find device "nvmf_tgt_br" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:13.536 Cannot find device "nvmf_tgt_br2" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:13.536 Cannot find device "nvmf_init_br" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:13.536 Cannot find device "nvmf_init_br2" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:13.536 Cannot find device "nvmf_tgt_br" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:13.536 Cannot find device "nvmf_tgt_br2" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:13.536 Cannot find device "nvmf_br" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:13.536 Cannot find device "nvmf_init_if" 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:27:13.536 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:13.797 Cannot find device "nvmf_init_if2" 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:13.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.797 14:31:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:13.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
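The trace above (finishing with nvmf_tgt_br2 just below) is nvmf_veth_init building the test topology: four veth pairs, a separate network namespace for the target, and a bridge joining both sides. A condensed sketch mirroring the traced commands, with error handling and the iptables ACCEPT rules omitted:

    ip netns add nvmf_tgt_ns_spdk
    for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
                nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
        ip link add "${pair%%:*}" type veth peer name "${pair##*:}"
    done
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                           # target-side ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up && ip link set "$br" master nvmf_br             # stitch the two sides together
    done

The earlier runs of "Cannot find device" and "Cannot open network namespace" are expected: teardown runs unconditionally before setup, and none of these interfaces existed yet.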
00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:13.797 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:13.797 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:13.797 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:27:13.797 00:27:13.797 --- 10.0.0.3 ping statistics --- 00:27:13.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.798 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:13.798 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:13.798 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms 00:27:13.798 00:27:13.798 --- 10.0.0.4 ping statistics --- 00:27:13.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.798 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:13.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:27:13.798 00:27:13.798 --- 10.0.0.1 ping statistics --- 00:27:13.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.798 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:13.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:13.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:27:13.798 00:27:13.798 --- 10.0.0.2 ping statistics --- 00:27:13.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.798 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=101241 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 101241 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 101241 ']' 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
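The four pings above are the smoke test for that topology: the host must reach both target-side addresses through the bridge, and the namespace must reach both initiator addresses back. Equivalent to:

    for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done    # host -> target side
    for ip in 10.0.0.1 10.0.0.2; do
        ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"       # namespace -> initiator side
    done

Only then does nvmfappstart launch the target inside the namespace (the ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth line that follows), so every NVMe/TCP connection in this test genuinely crosses the veth bridge.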
00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.798 14:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0db7fb3c82d3df0b19f01476dbf527ad 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dmE 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0db7fb3c82d3df0b19f01476dbf527ad 0 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0db7fb3c82d3df0b19f01476dbf527ad 0 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0db7fb3c82d3df0b19f01476dbf527ad 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dmE 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dmE 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dmE 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e3c58611045cf9eadc6b656734beaa5a51d11a103dd7ce91c1dfe06f87823794 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MK9 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e3c58611045cf9eadc6b656734beaa5a51d11a103dd7ce91c1dfe06f87823794 3 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e3c58611045cf9eadc6b656734beaa5a51d11a103dd7ce91c1dfe06f87823794 3 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e3c58611045cf9eadc6b656734beaa5a51d11a103dd7ce91c1dfe06f87823794 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:14.830 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MK9 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MK9 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MK9 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fb4e04edc79591f5e28377dfb259d5909b3906926df59d21 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZNL 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 
fb4e04edc79591f5e28377dfb259d5909b3906926df59d21 0 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fb4e04edc79591f5e28377dfb259d5909b3906926df59d21 0 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fb4e04edc79591f5e28377dfb259d5909b3906926df59d21 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZNL 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZNL 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ZNL 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:15.092 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9f1f5640de449b552478ac6a16289ba874cf1fcb0cbdeda2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yCl 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9f1f5640de449b552478ac6a16289ba874cf1fcb0cbdeda2 2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9f1f5640de449b552478ac6a16289ba874cf1fcb0cbdeda2 2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9f1f5640de449b552478ac6a16289ba874cf1fcb0cbdeda2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yCl 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yCl 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yCl 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5951d9ca717dde4980ddc4c5fee62d8b 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5ym 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5951d9ca717dde4980ddc4c5fee62d8b 1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5951d9ca717dde4980ddc4c5fee62d8b 1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5951d9ca717dde4980ddc4c5fee62d8b 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5ym 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5ym 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5ym 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fd2a91285714e5e6140303257cb5436b 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iwG 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fd2a91285714e5e6140303257cb5436b 1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fd2a91285714e5e6140303257cb5436b 1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.093 
14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fd2a91285714e5e6140303257cb5436b 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iwG 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iwG 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.iwG 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=628fc18d6204bb6b2651478271a73fbfba60c1d13708f924 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2By 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 628fc18d6204bb6b2651478271a73fbfba60c1d13708f924 2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 628fc18d6204bb6b2651478271a73fbfba60c1d13708f924 2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=628fc18d6204bb6b2651478271a73fbfba60c1d13708f924 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2By 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2By 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2By 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:15.093 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@754 -- # digest=null 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=475c60ad49ae56c68a52efac2ddf1434 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OZh 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 475c60ad49ae56c68a52efac2ddf1434 0 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 475c60ad49ae56c68a52efac2ddf1434 0 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=475c60ad49ae56c68a52efac2ddf1434 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.353 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OZh 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OZh 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OZh 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1d0e0f46d1d9599f20ba2bad604ab51d10bf558ed5a71447536a8eb7bf6b8f57 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.De3 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1d0e0f46d1d9599f20ba2bad604ab51d10bf558ed5a71447536a8eb7bf6b8f57 3 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1d0e0f46d1d9599f20ba2bad604ab51d10bf558ed5a71447536a8eb7bf6b8f57 3 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1d0e0f46d1d9599f20ba2bad604ab51d10bf558ed5a71447536a8eb7bf6b8f57 00:27:15.354 
14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.De3 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.De3 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.De3 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 101241 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 101241 ']' 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:15.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:15.354 14:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dmE 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MK9 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MK9 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZNL 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.yCl ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yCl 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5ym 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.iwG ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iwG 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2By 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OZh ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OZh 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.De3 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:15.615 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@35 -- # get_main_ns_ip 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:15.616 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:15.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:15.875 Waiting for block devices as requested 00:27:15.875 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:16.135 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:16.396 No valid GPT data, bailing 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:27:16.396 14:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:16.396 No valid GPT data, bailing 00:27:16.396 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:16.658 No valid GPT data, bailing 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:16.658 No valid GPT data, bailing 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
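The "No valid GPT data, bailing" lines come from the backing-device scan: the helper walks /sys/block/nvme*, skips zoned namespaces, and treats any device without a partition table as free, so the last free device (/dev/nvme1n1 here) ends up backing the kernel target namespace. A sketch of that selection logic, assuming plain blkid semantics rather than the exact nvmf/common.sh code:

  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # Skip zoned namespaces, which cannot back a plain nvmet namespace.
      [[ -e $block/queue/zoned && $(< "$block/queue/zoned") != none ]] && continue
      # No PTTYPE from blkid means no partition table, i.e. the device is unused.
      [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
  done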
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -a 10.0.0.1 -t tcp -s 4420 00:27:16.658 00:27:16.658 Discovery Log Number of Records 2, Generation counter 2 00:27:16.658 =====Discovery Log Entry 0====== 00:27:16.658 trtype: tcp 00:27:16.658 adrfam: ipv4 00:27:16.658 subtype: current discovery subsystem 00:27:16.658 treq: not specified, sq flow control disable supported 00:27:16.658 portid: 1 00:27:16.658 trsvcid: 4420 00:27:16.658 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:16.658 traddr: 10.0.0.1 00:27:16.658 eflags: none 00:27:16.658 sectype: none 00:27:16.658 =====Discovery Log Entry 1====== 00:27:16.658 trtype: tcp 00:27:16.658 adrfam: ipv4 00:27:16.658 subtype: nvme subsystem 00:27:16.658 treq: not specified, sq flow control disable supported 00:27:16.658 portid: 1 00:27:16.658 trsvcid: 4420 00:27:16.658 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:16.658 traddr: 10.0.0.1 00:27:16.658 eflags: none 00:27:16.658 sectype: none 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
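Everything between modprobe nvmet and the discovery listing is plain configfs plumbing: create the subsystem, namespace, and port directories, point the namespace at the chosen block device, describe the TCP listener, and symlink the subsystem into the port. A condensed sketch, assuming the standard kernel nvmet attribute names:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"   # publish the subsystem on the port

The nvme discover output above then confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable on 10.0.0.1:4420.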
ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.658 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
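nvmet_auth_set_key pushes the digest, DH group, and both DHHC-1 secrets into the host entry that was linked under the subsystem's allowed_hosts. A sketch, assuming the dhchap_* attribute names exposed by kernel nvmet, with the secrets shown as placeholders:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:<base64>:' > "$host/dhchap_key"       # host secret
  echo 'DHHC-1:02:<base64>:' > "$host/dhchap_ctrl_key"  # controller secret (bidirectional)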
10.0.0.1 ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.919 nvme0n1 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
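connect_authenticate then exercises the SPDK host side of the handshake: restrict the digests and DH groups the initiator may negotiate, attach with the keyring names registered earlier, confirm the controller came up, and detach. Condensed from the RPC calls in the trace:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Authentication succeeded iff the controller is visible afterwards.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0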
host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.919 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.920 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.920 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.920 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.920 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.920 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.180 nvme0n1 00:27:17.180 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.180 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.181 
14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.181 14:31:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.181 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.443 nvme0n1 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:17.443 14:31:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.443 nvme0n1 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.443 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.444 14:31:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.706 nvme0n1 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.706 
14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.706 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
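From here the trace repeats the same set-key/attach/verify/detach cycle for every combination the @100-@102 loop markers advertise: each digest, each FFDHE group, and each key index (keyid 4 has an empty ckey, so that pass runs unidirectional authentication). The sweep reduces to roughly:

  for digest in sha256 sha384 sha512; do
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
          for keyid in 0 1 2 3 4; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # kernel target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK host side
          done
      done
  done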
00:27:17.967 nvme0n1 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.967 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.229 14:31:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.229 nvme0n1 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.229 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.490 14:31:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.490 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.491 14:31:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.491 14:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.491 nvme0n1 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.491 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.753 nvme0n1 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.753 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.015 nvme0n1 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.015 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.276 nvme0n1 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
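Note: the ckey expansion traced at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes key ID 4 different. Its controller secret is empty (the [[ -z '' ]] above), so the array expands to nothing and the attach at auth.sh@61 passes only --dhchap-key key4, i.e. unidirectional authentication; for the other key IDs it adds --dhchap-ctrlr-key so the host also challenges the controller. A minimal sketch of the same bash idiom, with made-up key names (rpc_cmd stands in for the test's wrapper around SPDK's scripts/rpc.py):

  # ${var:+word}: emit the option pair only when a controller key exists.
  rpc_cmd() { "${rootdir:?}/scripts/rpc.py" "$@"; }   # stand-in wrapper
  declare -A ckeys=([1]=ckey1 [4]="")                 # keyid 4: host key only
  keyid=4
  ckey_opt=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

  # "${ckey_opt[@]}" contributes zero words for keyid=4, two for keyid=1
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey_opt[@]}"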
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.276 14:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.847 14:31:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.847 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.109 nvme0n1 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.109 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.110 14:31:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.110 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.371 nvme0n1 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.371 14:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 nvme0n1 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
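Note: get_main_ns_ip (nvmf/common.sh@769-783), traced before every attach, picks the address the initiator dials: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then dereferences that name, which resolves to 10.0.0.1 throughout this run. A reconstruction consistent with the trace markers; the real helper may differ in detail:

  # Reconstruction of get_main_ns_ip from the xtrace above.
  NVMF_INITIATOR_IP=10.0.0.1
  TEST_TRANSPORT=tcp

  get_main_ns_ip() {
      local ip                                      # nvmf/common.sh@769
      local -A ip_candidates                        # @770
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # @772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP        # @773
      [[ -z $TEST_TRANSPORT ]] && return 1          # @775: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}          # @776: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                   # @778 tests the dereferenced value
      echo "${!ip}"                                 # @783: echo 10.0.0.1
  }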
nvmet_auth_set_key sha256 ffdhe4096 3 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.633 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.895 nvme0n1 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.895 14:31:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.895 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.896 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.158 nvme0n1 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
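Note: success is asserted indirectly on every iteration. bdev_nvme_attach_controller only yields a controller once the DH-HMAC-CHAP handshake completes, the kernel then surfaces the namespace (the bare nvme0n1 lines in the trace), and auth.sh@64-65 confirms the expected controller name over RPC before detaching for the next key. A sketch of that check, reusing the rpc_cmd stand-in from the earlier note:

  # Per-iteration verification (cf. auth.sh@64-65): finding the controller
  # by name proves the authenticated attach worked.
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                      # the trace's [[ nvme0 == \n\v\m\e\0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next keyid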
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.158 14:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.073 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.334 nvme0n1 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.334 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.335 14:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.906 nvme0n1 00:27:23.906 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.906 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.906 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.906 14:31:51 
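Note: all of the secrets in this trace use the NVMe-oF DH-HMAC-CHAP ASCII representation DHHC-1:<t>:<base64>:, where <t> indicates the transformation applied to the secret (00 unhashed; 01, 02, 03 for SHA-256, SHA-384, SHA-512) and the base64 payload carries the key material followed by a 4-byte CRC-32 of it. That framing comes from the NVMe specification rather than this log; a quick length check one can run on such a secret (hypothetical snippet, not part of the test suite), using the key1 secret echoed above:

  secret='DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==:'
  payload=$(cut -d: -f3 <<< "$secret")        # strip the DHHC-1:<t>: framing
  nbytes=$(base64 -d <<< "$payload" | wc -c)  # 52 here: 48-byte key + 4-byte CRC-32
  echo "$(( nbytes - 4 ))-byte key"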
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.907 14:31:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.907 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.168 nvme0n1 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.168 14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.168 
14:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.741 nvme0n1 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.741 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.313 nvme0n1 00:27:25.313 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.313 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.313 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.313 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.313 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.313 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.314 14:31:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.314 14:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.257 nvme0n1 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.257 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.258 14:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.829 nvme0n1 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.829 
14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.829 14:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.773 nvme0n1 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.773 14:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.716 nvme0n1 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.716 14:31:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.716 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.717 14:31:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.717 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.290 nvme0n1 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.290 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.580 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.580 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.580 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.580 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.580 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.580 14:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.580 nvme0n1 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.580 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.581 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.844 nvme0n1 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.844 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:29.845 
14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.845 nvme0n1 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.845 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.107 
14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.107 nvme0n1 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.107 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 nvme0n1 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 14:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.629 nvme0n1 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.629 
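Every iteration in this stretch of the log repeats the same cycle: stage a key on the kernel nvmet target, restrict the SPDK host to a single digest/dhgroup pair, attach with the matching host key, confirm the controller actually came up, and detach before the next combination. The following is a condensed reconstruction assembled from the traced commands, not host/auth.sh verbatim; the loop bounds, the keys/ckeys arrays, and the argument plumbing are assumptions, with the digest fixed at sha384 as in this portion of the trace.

    # Sketch of the traced per-key cycle; rpc_cmd, nvmet_auth_set_key and
    # get_main_ns_ip are the helpers visible in the xtrace.
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"              # target side
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"                       # host side
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # DH-HMAC-CHAP succeeded only if the controller now exists:
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0                  # reset for next pass
      done
    done

Each pass that reaches the detach without tripping the suite's error handling counts as a successful authentication, mutual when a ckey is present and one-way otherwise, for that digest/dhgroup/key combination.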
14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.629 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.630 14:31:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.630 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.891 nvme0n1 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:30.891 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:30.892 14:31:58 
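The key= and ckey= values assigned just above use the DH-HMAC-CHAP secret representation from the NVMe over Fabrics specification: DHHC-1:<t>:<base64>:, where the two-digit <t> field records how the secret was transformed (00 for an untransformed secret, 01/02/03 for SHA-256/384/512) and the base64 payload carries the secret followed by a CRC-32 tail. That field layout is background on the format rather than something this log states; a minimal split of one of the keys traced above, using only standard tools:

    # Pull apart one of the test secrets from this trace.
    key='DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY:'
    IFS=: read -r fmt transform payload _ <<< "$key"
    echo "format=$fmt transform=$transform"     # format=DHHC-1 transform=01
    printf '%s' "$payload" | base64 -d | wc -c  # 36 = 32-byte secret + 4-byte CRC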
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.892 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.153 nvme0n1 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.153 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.154 14:31:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.154 nvme0n1 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.154 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.415 
14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
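The repeated [[ nvme0 == \n\v\m\e\0 ]] checks running through this trace (the next one appears just below) look like control-character escapes but are an xtrace artifact. The right-hand side of == inside [[ ]] is a glob pattern, so the script quotes it to force a literal match, and bash's set -x output re-prints a quoted pattern with every character backslash-escaped, which renders "nvme0" as \n\v\m\e\0. The underlying check is plain string equality against the controller name reported by the RPC layer:

    # Equivalent of the traced host/auth.sh@64 check, minus the rpc plumbing.
    set -x
    name=nvme0                 # stand-in for: rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
    [[ $name == "nvme0" ]] && echo "controller present: authentication passed"
    set +x                     # the traced line prints as: [[ nvme0 == \n\v\m\e\0 ]]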
00:27:31.415 nvme0n1 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.415 14:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.415 14:31:59 
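The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment that opens the next traced call is the usual bash idiom for an optional flag/value pair: when ckeys[keyid] is set and non-empty (bidirectional authentication), the array holds both words; otherwise it stays empty and "${ckey[@]}" expands to nothing, so the attach is attempted with one-way authentication only. A standalone demonstration of the expansion, with a hypothetical ckeys array:

    # ${var:+word} yields word only when var is set and non-empty.
    count_args() { echo "argc=$#: $*"; }
    ckeys=([1]="DHHC-1:02:...")    # hypothetical: only keyid 1 has a ctrlr key
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        count_args --dhchap-key "key${keyid}" "${ckey[@]}"
    done
    # argc=4: --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # argc=2: --dhchap-key key4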
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.415 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.416 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.416 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.416 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.416 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.416 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.676 nvme0n1 00:27:31.676 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.676 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.676 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.676 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.676 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.676 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.937 14:31:59 
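In the nvmet_auth_set_key call traced below (as in every earlier one), the bare echo 'hmac(sha384)', echo ffdhe4096 and echo DHHC-1:... lines at host/auth.sh@48-50 are not stray output: set -x does not display redirections, so each echo is presumably writing the digest, DH group, and key into the in-kernel target's configfs entry for this host. A hypothetical reconstruction follows; the configfs paths and attribute names are assumed from the kernel nvmet interface, not from anything visible in this log:

    # Assumed shape of host/auth.sh@48-50; all paths are an assumption.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)'  > "$host_dir/dhchap_hash"     # digest for DH-HMAC-CHAP
    echo ffdhe4096       > "$host_dir/dhchap_dhgroup"  # FFDHE group to negotiate
    echo 'DHHC-1:00:...' > "$host_dir/dhchap_key"      # host secret (elided here)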
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.937 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.938 14:31:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.938 nvme0n1 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.938 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.197 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 nvme0n1 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.459 14:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.719 nvme0n1 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.719 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.978 nvme0n1 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.978 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.979 14:32:00 
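The get_main_ns_ip trace that follows (nvmf/common.sh@769-783) runs before every attach and is what keeps the suite transport-agnostic: an associative array maps each transport to the name of the environment variable holding the right address, and the value of that variable (10.0.0.1 for tcp here) is echoed back. A sketch reconstructed from the traced lines; the ${!ip} indirection and the $TEST_TRANSPORT variable name are inferred, since the xtrace only shows the expanded values:

    # Sketch of get_main_ns_ip per the traced lines; indirection is assumed.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # traced: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # traced: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                             # traced: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # traced: echo 10.0.0.1
    }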
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.979 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.237 nvme0n1 00:27:33.237 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.237 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.237 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.237 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.237 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.237 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.496 14:32:00 
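The pass just traced (attach with key1/ckey1, followed immediately below by the controller check and teardown) is one call of connect_authenticate. Reconstructed from the xtrace at host/auth.sh@55-65, the helper amounts to the sketch below; this is an approximation of the flow, not the literal script source, and key${keyid}/ckey${keyid} are keyring names registered earlier in the run:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # drops the whole argument pair when no controller key exists for this slot
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # pin the host to exactly one digest/DH-group combination for this pass
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

        # authentication only counts if the controller actually came up
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The bare nvme0n1 tokens between entries are the attach RPC's stdout: the name of the bdev created once the DH-HMAC-CHAP handshake succeeds.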
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.496 14:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.754 nvme0n1 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.754 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.755 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.320 nvme0n1 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.320 14:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.579 nvme0n1 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.579 14:32:02 
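Each pass is preceded by nvmet_auth_set_key, which provisions the same key material on the kernel soft-target side (host/auth.sh@42-51 in the trace). xtrace does not show where the echoes at @48-@51 are redirected; the sketch below assumes the standard nvmet configfs per-host attributes, which match the values being echoed:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "$host/dhchap_hash"    # e.g. 'hmac(sha384)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"        # e.g. ffdhe6144
        echo "$key" > "$host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }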
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.579 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 nvme0n1 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.145 14:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.711 nvme0n1 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.711 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.278 nvme0n1 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.278 14:32:03 
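The --dhchap-key key1 argument about to be passed to the attach names an entry in SPDK's keyring rather than carrying the secret inline. The registration step falls outside this excerpt; presumably it was done earlier in the run via the file-based keyring RPC, along the lines of the following (file paths hypothetical, each file holding one DHHC-1:... string):

    # register host and controller secrets as named keyring entries
    rpc_cmd keyring_file_add_key key1 /tmp/host.key1
    rpc_cmd keyring_file_add_key ckey1 /tmp/ctrlr.key1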
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.278 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.536 14:32:03 
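The next entries trace get_main_ns_ip (nvmf/common.sh@769-783), which resolves the address to dial by transport. Reconstructed from the trace as a sketch; only the tcp path is visible in this excerpt, and the TEST_TRANSPORT variable name is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable *name*
        [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }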
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.536 14:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.103 nvme0n1 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.103 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.103 
14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.361 nvme0n1 00:27:37.361 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.622 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.622 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.622 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.622 14:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
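Note the ckey=() assignment just traced for keyid 4: its slot in ckeys is empty (ckey= above), and bash's ${var:+...} expansion makes the entire --dhchap-ctrlr-key argument pair vanish rather than passing an empty string, so slot 4 exercises unidirectional (host-only) authentication. A minimal illustration with hypothetical values:

    ckeys=('ctrlr-secret' '')                  # slot 0 set, slot 1 empty
    keyid=0; args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo ${#args[@]}                           # 2: the pair is passed through
    keyid=1; args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo ${#args[@]}                           # 0: no controller key at all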
common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.622 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.189 nvme0n1 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:38.189 14:32:05 
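At this point the trace has rolled over from sha384 back to ffdhe2048/keyid 0 under sha512: the outer driver at host/auth.sh@100-104 advancing one digest. Its shape, reconstructed from the loop headers visible in the trace (array contents inferred from the values seen in this run):

    for digest in "${digests[@]}"; do            # sha384 -> sha512 across this excerpt
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, 4096, 6144, 8192 seen here
            for keyid in "${!keys[@]}"; do       # key slots 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done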
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.189 14:32:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.189 nvme0n1 00:27:38.189 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:38.190 14:32:05 
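For reference, rpc_cmd here is the autotest wrapper around SPDK's scripts/rpc.py (the common/autotest_common.sh@561/@589 entries interleaved above are its xtrace_disable internals). The sha512/ffdhe2048 keyid-1 iteration being traced corresponds roughly to the following direct invocations, assuming the default RPC socket:

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_detach_controller nvme0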
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.190 nvme0n1 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.190 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.448 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.448 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.448 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.449 nvme0n1 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.449 14:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.449 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.709 nvme0n1 00:27:38.709 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.709 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.709 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.709 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.710 nvme0n1 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.710 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.979 nvme0n1 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.979 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.980 nvme0n1 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.980 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:39.244 
14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:39.244 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.245 nvme0n1 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.245 
14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.245 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.505 nvme0n1 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:39.505 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.506 14:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.506 nvme0n1 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.506 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.765 nvme0n1 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.765 
14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.765 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.024 14:32:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.024 nvme0n1 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:40.024 14:32:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.024 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.283 nvme0n1 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.283 14:32:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.283 14:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.541 nvme0n1 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.541 
14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:40.541 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.542 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
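The trace above is one pass of the same round-trip repeated for every key: install the DHHC-1 secret on the target, pin the host to a single digest/dhgroup pair, attach, confirm the controller came up, and detach again. A minimal sketch of that cycle, reconstructed from the commands visible in the trace and assuming the rpc_cmd and nvmet_auth_set_key helpers defined by the SPDK test harness (nvmf/common.sh and host/auth.sh), with the addresses, NQNs, and key names reused from the log:

  # One DH-HMAC-CHAP round-trip as exercised above (sha512 / ffdhe4096 shown).
  digest=sha512
  dhgroup=ffdhe4096
  keyid=1

  # Target side: program the secret (and controller secret, when one exists)
  # for this keyid into the kernel nvmet subsystem.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: restrict negotiation to exactly this digest/dhgroup pair.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the matching host key. The trace builds the controller-key
  # argument conditionally (host/auth.sh@58), since keyid 4 has no ckey;
  # here it is passed unconditionally for the keyid=1 case shown.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Authentication succeeded iff the bdev controller exists, then tear down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of this section is the same cycle driven by the two loops visible at host/auth.sh@101-102, for dhgroup in "${dhgroups[@]}" and for keyid in "${!keys[@]}", which here walk ffdhe4096, ffdhe6144, and ffdhe8192 under hmac(sha512).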
00:27:40.800 nvme0n1 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.800 14:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.800 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.366 nvme0n1 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.366 14:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.366 14:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.366 14:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.625 nvme0n1 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.625 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.884 nvme0n1 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.884 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.142 nvme0n1 00:27:42.142 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.142 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.142 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.142 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.142 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.142 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.401 14:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.659 nvme0n1 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.659 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiN2ZiM2M4MmQzZGYwYjE5ZjAxNDc2ZGJmNTI3YWSiEjn5: 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: ]] 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjNTg2MTEwNDVjZjllYWRjNmI2NTY3MzRiZWFhNWE1MWQxMWExMDNkZDdjZTkxYzFkZmUwNmY4NzgyMzc5NFsGusY=: 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.660 14:32:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.660 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.226 nvme0n1 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.226 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.227 14:32:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.227 14:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.794 nvme0n1 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.794 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.795 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.361 nvme0n1 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjI4ZmMxOGQ2MjA0YmI2YjI2NTE0NzgyNzFhNzNmYmZiYTYwYzFkMTM3MDhmOTI0UJyA4A==: 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc1YzYwYWQ0OWFlNTZjNjhhNTJlZmFjMmRkZjE0MzRFixCB: 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.361 14:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.928 nvme0n1 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWQwZTBmNDZkMWQ5NTk5ZjIwYmEyYmFkNjA0YWI1MWQxMGJmNTU4ZWQ1YTcxNDQ3NTM2YThlYjdiZjZiOGY1N5VSzg4=: 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.928 14:32:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.928 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.186 nvme0n1 00:27:45.187 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.187 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.187 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.187 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.187 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.187 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 2024/11/06 14:32:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:45.446 request: 00:27:45.446 { 00:27:45.446 "method": "bdev_nvme_attach_controller", 00:27:45.446 "params": { 00:27:45.446 "name": "nvme0", 00:27:45.446 "trtype": "tcp", 00:27:45.446 "traddr": "10.0.0.1", 00:27:45.446 "adrfam": "ipv4", 00:27:45.446 "trsvcid": "4420", 00:27:45.446 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:45.446 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:45.446 "prchk_reftag": false, 00:27:45.446 "prchk_guard": false, 00:27:45.446 "hdgst": false, 00:27:45.446 "ddgst": false, 00:27:45.446 "allow_unrecognized_csi": false 00:27:45.446 } 00:27:45.446 } 00:27:45.446 Got JSON-RPC error response 00:27:45.446 GoRPCClient: error on JSON-RPC call 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.446 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 2024/11/06 14:32:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:45.446 request: 00:27:45.446 { 00:27:45.446 "method": "bdev_nvme_attach_controller", 00:27:45.446 "params": { 00:27:45.446 "name": "nvme0", 00:27:45.446 "trtype": "tcp", 00:27:45.446 "traddr": "10.0.0.1", 00:27:45.446 "adrfam": "ipv4", 00:27:45.446 "trsvcid": "4420", 00:27:45.446 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:45.446 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:45.446 "prchk_reftag": false, 00:27:45.446 "prchk_guard": false, 
00:27:45.446 "hdgst": false, 00:27:45.446 "ddgst": false, 00:27:45.446 "dhchap_key": "key2", 00:27:45.446 "allow_unrecognized_csi": false 00:27:45.446 } 00:27:45.446 } 00:27:45.446 Got JSON-RPC error response 00:27:45.447 GoRPCClient: error on JSON-RPC call 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t 
rpc_cmd 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.447 14:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.447 2024/11/06 14:32:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:45.447 request: 00:27:45.447 { 00:27:45.447 "method": "bdev_nvme_attach_controller", 00:27:45.447 "params": { 00:27:45.447 "name": "nvme0", 00:27:45.447 "trtype": "tcp", 00:27:45.447 "traddr": "10.0.0.1", 00:27:45.447 "adrfam": "ipv4", 00:27:45.447 "trsvcid": "4420", 00:27:45.447 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:45.447 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:45.447 "prchk_reftag": false, 00:27:45.447 "prchk_guard": false, 00:27:45.447 "hdgst": false, 00:27:45.447 "ddgst": false, 00:27:45.447 "dhchap_key": "key1", 00:27:45.447 "dhchap_ctrlr_key": "ckey2", 00:27:45.447 "allow_unrecognized_csi": false 00:27:45.447 } 00:27:45.447 } 00:27:45.447 Got JSON-RPC error response 00:27:45.447 GoRPCClient: error on JSON-RPC call 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.447 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.706 nvme0n1 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.706 2024/11/06 14:32:13 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:27:45.706 request: 00:27:45.706 { 00:27:45.706 "method": "bdev_nvme_set_keys", 00:27:45.706 "params": { 00:27:45.706 "name": "nvme0", 00:27:45.706 "dhchap_key": "key1", 00:27:45.706 "dhchap_ctrlr_key": "ckey2" 00:27:45.706 } 00:27:45.706 } 00:27:45.706 Got JSON-RPC error response 00:27:45.706 GoRPCClient: error on JSON-RPC call 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:45.706 14:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:46.641 14:32:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI0ZTA0ZWRjNzk1OTFmNWUyODM3N2RmYjI1OWQ1OTA5YjM5MDY5MjZkZjU5ZDIxJ0OKSQ==: 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: ]] 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYxZjU2NDBkZTQ0OWI1NTI0NzhhYzZhMTYyODliYTg3NGNmMWZjYjBjYmRlZGEyXvuPMw==: 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.641 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.900 nvme0n1 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
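At this point the trace has re-attached nvme0 with the matching key1/ckey1 pair after the rejected rotation, and is about to repeat the negative re-key test in the other direction (key2 against the stale ckey1). The whole cycle reduces to a handful of JSON-RPC calls; the sketch below replays it by hand with scripts/rpc.py rather than the test's rpc_cmd wrapper. It is illustrative only: it assumes the generated key files were already registered under the names key1/ckey1/key2 (the test suite does this during setup, via SPDK's keyring), and it uses only RPCs that appear verbatim in this log.

# Illustrative sketch (not part of the test run): replaying the DH-HMAC-CHAP
# attach/re-key cycle against the kernel nvmet target configured above.
# Assumes key1/ckey1/key2 are already resolvable names in SPDK's keyring.

# Restrict the initiator to the digest/dhgroup pair the target expects.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with the host key (key1) and controller key (ckey1) that
# nvmet_auth_set_key installed on the target side.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

# Rotating to a key pair the target does not hold must be rejected with
# Code=-13 (Permission denied) -- the negative case exercised in this log.
if scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1; then
    echo "unexpected: mismatched re-key was accepted" >&2
fi

# The failed re-authentication drops the controller; poll until it is gone
# before installing the next key configuration, as the @137/@148 loops do.
while (( $(scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1
done

The short --ctrlr-loss-timeout-sec/--reconnect-delay-sec values appear to bound how long each negative case waits for the dropped controller to disappear, which is what keeps the one-second sleep loops above converging quickly between iterations.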
00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk1MWQ5Y2E3MTdkZGU0OTgwZGRjNGM1ZmVlNjJkOGKqN+LY: 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: ]] 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmQyYTkxMjg1NzE0ZTVlNjE0MDMwMzI1N2NiNTQzNmKymp7m: 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.900 2024/11/06 14:32:14 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:27:46.900 request: 00:27:46.900 { 00:27:46.900 "method": "bdev_nvme_set_keys", 00:27:46.900 "params": { 00:27:46.900 "name": "nvme0", 00:27:46.900 "dhchap_key": "key2", 00:27:46.900 "dhchap_ctrlr_key": "ckey1" 00:27:46.900 } 00:27:46.900 } 00:27:46.900 Got JSON-RPC error response 00:27:46.900 GoRPCClient: error on JSON-RPC call 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:46.900 14:32:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:46.900 14:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.835 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.835 rmmod nvme_tcp 00:27:47.835 rmmod nvme_fabrics 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 101241 ']' 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 101241 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 101241 ']' 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 101241 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 101241 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:48.093 14:32:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101241' 00:27:48.093 killing process with pid 101241 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 101241 00:27:48.093 14:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 101241 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:48.665 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:48.971 14:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:49.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:49.491 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:49.491 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:49.491 14:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dmE /tmp/spdk.key-null.ZNL /tmp/spdk.key-sha256.5ym /tmp/spdk.key-sha384.2By /tmp/spdk.key-sha512.De3 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:27:49.491 14:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:49.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:49.752 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:49.752 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:50.013 00:27:50.013 real 0m36.513s 00:27:50.013 user 0m29.964s 00:27:50.013 sys 0m3.217s 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:50.013 ************************************ 00:27:50.013 END TEST nvmf_auth_host 00:27:50.013 ************************************ 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.013 ************************************ 00:27:50.013 START TEST nvmf_digest 00:27:50.013 
************************************ 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:50.013 * Looking for test storage... 00:27:50.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:50.013 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:50.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.014 --rc genhtml_branch_coverage=1 00:27:50.014 --rc genhtml_function_coverage=1 00:27:50.014 --rc genhtml_legend=1 00:27:50.014 --rc geninfo_all_blocks=1 00:27:50.014 --rc geninfo_unexecuted_blocks=1 00:27:50.014 00:27:50.014 ' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:50.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.014 --rc genhtml_branch_coverage=1 00:27:50.014 --rc genhtml_function_coverage=1 00:27:50.014 --rc genhtml_legend=1 00:27:50.014 --rc geninfo_all_blocks=1 00:27:50.014 --rc geninfo_unexecuted_blocks=1 00:27:50.014 00:27:50.014 ' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:50.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.014 --rc genhtml_branch_coverage=1 00:27:50.014 --rc genhtml_function_coverage=1 00:27:50.014 --rc genhtml_legend=1 00:27:50.014 --rc geninfo_all_blocks=1 00:27:50.014 --rc geninfo_unexecuted_blocks=1 00:27:50.014 00:27:50.014 ' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:50.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.014 --rc genhtml_branch_coverage=1 00:27:50.014 --rc genhtml_function_coverage=1 00:27:50.014 --rc genhtml_legend=1 00:27:50.014 --rc geninfo_all_blocks=1 00:27:50.014 --rc geninfo_unexecuted_blocks=1 00:27:50.014 00:27:50.014 ' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.014 14:32:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:50.014 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:50.014 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:50.015 Cannot find device "nvmf_init_br" 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:27:50.015 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:50.276 Cannot find device "nvmf_init_br2" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:50.276 Cannot find device "nvmf_tgt_br" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:27:50.276 Cannot find device "nvmf_tgt_br2" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:50.276 Cannot find device "nvmf_init_br" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:50.276 Cannot find device "nvmf_init_br2" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:50.276 Cannot find device "nvmf_tgt_br" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:50.276 Cannot find device "nvmf_tgt_br2" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:50.276 Cannot find device "nvmf_br" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:50.276 Cannot find device "nvmf_init_if" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:50.276 Cannot find device "nvmf_init_if2" 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:50.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:50.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:27:50.276 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:50.277 14:32:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:50.277 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:50.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:50.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:27:50.539 00:27:50.539 --- 10.0.0.3 ping statistics --- 00:27:50.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.539 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:50.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:50.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:27:50.539 00:27:50.539 --- 10.0.0.4 ping statistics --- 00:27:50.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.539 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:50.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:27:50.539 00:27:50.539 --- 10.0.0.1 ping statistics --- 00:27:50.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.539 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:50.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:27:50.539 00:27:50.539 --- 10.0.0.2 ping statistics --- 00:27:50.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.539 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.539 ************************************ 00:27:50.539 START TEST nvmf_digest_clean 00:27:50.539 ************************************ 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
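Note: the nvmf_veth_init sequence traced above builds the whole test network before any NVMe/TCP traffic flows: two initiator veth pairs (10.0.0.1, 10.0.0.2) stay in the root namespace, two target pairs (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, everything is enslaved to the nvmf_br bridge, iptables ACCEPT rules open port 4420, and the four pings confirm connectivity in both directions. A condensed single-pair sketch of the same topology (device and namespace names copied from the log; the second pair, iptables rules, and error handling are omitted):

# one initiator pair in the root namespace, one target pair in the test namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3    # root namespace -> target, as verified above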
00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=102912 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 102912 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 102912 ']' 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:50.539 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.539 [2024-11-06 14:32:18.049630] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:50.539 [2024-11-06 14:32:18.049735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.800 [2024-11-06 14:32:18.205721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.800 [2024-11-06 14:32:18.300733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.800 [2024-11-06 14:32:18.300776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.800 [2024-11-06 14:32:18.300788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.800 [2024-11-06 14:32:18.300802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.800 [2024-11-06 14:32:18.300812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
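Note: nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc, as traced above) and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that wait, assuming a plain socket-existence check is enough (the real helper also issues an RPC over the socket; waitforsock is a hypothetical name, and the retry bound mirrors max_retries=100 from the trace):

# poll until the target's RPC socket shows up; give up after 100 tries
waitforsock() {
  local sock=$1 i
  for ((i = 0; i < 100; i++)); do
    [[ -S $sock ]] && return 0   # socket exists, target is (probably) listening
    sleep 0.1
  done
  return 1
}
waitforsock /var/tmp/spdk.sock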
00:27:50.800 [2024-11-06 14:32:18.301673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.373 14:32:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.634 null0 00:27:51.634 [2024-11-06 14:32:19.170220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.634 [2024-11-06 14:32:19.194314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:51.634 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=102962 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 102962 /var/tmp/bperf.sock 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 102962 ']' 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:51.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
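Note: each of the four run_bperf passes below is the same recipe with only the workload knobs changed (randread 4096 128 here, then randread 131072 16, randwrite 4096 128, and randwrite 131072 16): bdevperf is started against its own RPC socket, configured over that socket, and only then told to run. The launch pattern, with paths and flags copied from the trace:

# --wait-for-rpc defers framework init; -z defers the run until the
# perform_tests RPC arrives over the dedicated bperf socket
BPERF_SOCK=/var/tmp/bperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &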
00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:51.635 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.896 [2024-11-06 14:32:19.271690] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:51.896 [2024-11-06 14:32:19.271802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102962 ] 00:27:51.896 [2024-11-06 14:32:19.431172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.896 [2024-11-06 14:32:19.528563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.836 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:52.836 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:52.836 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:52.836 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.836 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:53.095 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.095 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.353 nvme0n1 00:27:53.353 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:53.353 14:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:53.353 Running I/O for 2 seconds... 
00:27:55.222 19119.00 IOPS, 74.68 MiB/s [2024-11-06T14:32:22.857Z] 20042.50 IOPS, 78.29 MiB/s 00:27:55.222 Latency(us) 00:27:55.222 [2024-11-06T14:32:22.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.222 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:55.222 nvme0n1 : 2.00 20061.48 78.37 0.00 0.00 6374.24 3705.30 21273.99 00:27:55.222 [2024-11-06T14:32:22.857Z] =================================================================================================================== 00:27:55.222 [2024-11-06T14:32:22.857Z] Total : 20061.48 78.37 0.00 0.00 6374.24 3705.30 21273.99 00:27:55.222 { 00:27:55.222 "results": [ 00:27:55.222 { 00:27:55.222 "job": "nvme0n1", 00:27:55.222 "core_mask": "0x2", 00:27:55.222 "workload": "randread", 00:27:55.222 "status": "finished", 00:27:55.222 "queue_depth": 128, 00:27:55.222 "io_size": 4096, 00:27:55.222 "runtime": 2.004488, 00:27:55.222 "iops": 20061.482034315, 00:27:55.222 "mibps": 78.36516419654296, 00:27:55.222 "io_failed": 0, 00:27:55.222 "io_timeout": 0, 00:27:55.222 "avg_latency_us": 6374.23564687271, 00:27:55.222 "min_latency_us": 3705.3046153846153, 00:27:55.222 "max_latency_us": 21273.993846153848 00:27:55.222 } 00:27:55.222 ], 00:27:55.222 "core_count": 1 00:27:55.222 } 00:27:55.480 14:32:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:55.480 14:32:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:55.480 14:32:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:55.480 14:32:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:55.480 | select(.opcode=="crc32c") 00:27:55.480 | "\(.module_name) \(.executed)"' 00:27:55.480 14:32:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:55.480 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:55.480 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 102962 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 102962 ']' 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 102962 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 102962 00:27:55.481 killing process with pid 102962 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 102962' 00:27:55.481 Received shutdown signal, test time was about 2.000000 seconds 00:27:55.481 00:27:55.481 Latency(us) 00:27:55.481 [2024-11-06T14:32:23.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.481 [2024-11-06T14:32:23.116Z] =================================================================================================================== 00:27:55.481 [2024-11-06T14:32:23.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 102962 00:27:55.481 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 102962 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=103054 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 103054 /var/tmp/bperf.sock 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 103054 ']' 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:56.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:56.047 14:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.047 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.047 Zero copy mechanism will not be used. 00:27:56.047 [2024-11-06 14:32:23.615337] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
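Note: after every pass the script decides whether the crc32c digests were computed by the expected accel module. With scan_dsa=false the expected module is software, and the check above boils down to pulling accel stats over the bperf socket and keeping the crc32c row (filter copied verbatim from host/digest.sh):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# expected output shape: "software <count>" with count > 0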
00:27:56.047 [2024-11-06 14:32:23.615449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103054 ] 00:27:56.306 [2024-11-06 14:32:23.774618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.306 [2024-11-06 14:32:23.868070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.872 14:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:56.872 14:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:56.872 14:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:56.872 14:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:56.872 14:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:57.437 14:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.438 14:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.438 nvme0n1 00:27:57.438 14:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:57.438 14:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:57.695 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:57.695 Zero copy mechanism will not be used. 00:27:57.695 Running I/O for 2 seconds... 
00:27:59.589 9826.00 IOPS, 1228.25 MiB/s [2024-11-06T14:32:27.224Z] 9753.00 IOPS, 1219.12 MiB/s 00:27:59.589 Latency(us) 00:27:59.589 [2024-11-06T14:32:27.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.589 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:59.589 nvme0n1 : 2.00 9750.07 1218.76 0.00 0.00 1638.41 472.62 3251.59 00:27:59.589 [2024-11-06T14:32:27.224Z] =================================================================================================================== 00:27:59.589 [2024-11-06T14:32:27.224Z] Total : 9750.07 1218.76 0.00 0.00 1638.41 472.62 3251.59 00:27:59.589 { 00:27:59.589 "results": [ 00:27:59.589 { 00:27:59.589 "job": "nvme0n1", 00:27:59.589 "core_mask": "0x2", 00:27:59.589 "workload": "randread", 00:27:59.589 "status": "finished", 00:27:59.589 "queue_depth": 16, 00:27:59.589 "io_size": 131072, 00:27:59.589 "runtime": 2.002242, 00:27:59.589 "iops": 9750.07017133793, 00:27:59.589 "mibps": 1218.7587714172412, 00:27:59.589 "io_failed": 0, 00:27:59.589 "io_timeout": 0, 00:27:59.589 "avg_latency_us": 1638.413153759467, 00:27:59.589 "min_latency_us": 472.61538461538464, 00:27:59.589 "max_latency_us": 3251.5938461538462 00:27:59.589 } 00:27:59.589 ], 00:27:59.589 "core_count": 1 00:27:59.589 } 00:27:59.589 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:59.589 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:59.589 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:59.589 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:59.589 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:59.589 | select(.opcode=="crc32c") 00:27:59.589 | "\(.module_name) \(.executed)"' 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 103054 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 103054 ']' 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 103054 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103054 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103054' 00:27:59.847 killing process with pid 103054 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 103054 00:27:59.847 Received shutdown signal, test time was about 2.000000 seconds 00:27:59.847 00:27:59.847 Latency(us) 00:27:59.847 [2024-11-06T14:32:27.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.847 [2024-11-06T14:32:27.482Z] =================================================================================================================== 00:27:59.847 [2024-11-06T14:32:27.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.847 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 103054 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=103145 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 103145 /var/tmp/bperf.sock 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 103145 ']' 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:00.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:00.413 14:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.413 [2024-11-06 14:32:28.016973] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
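Note: the teardown that repeats after each pass (killprocess 102962, 103054, ...) follows one pattern, visible in the traces above: confirm the pid is still alive with kill -0, read its comm name with ps to make sure it is the expected reactor, then kill it and wait for the exit status. A compressed sketch under the hypothetical name killbperf (the real killprocess in autotest_common.sh carries more guards, e.g. the sudo check traced above):

killbperf() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
  ps --no-headers -o comm= "$pid"           # e.g. reactor_1, as logged
  kill "$pid"
  wait "$pid"                               # reap and surface the exit code
}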
00:28:00.413 [2024-11-06 14:32:28.017086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103145 ] 00:28:00.671 [2024-11-06 14:32:28.174069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.671 [2024-11-06 14:32:28.269404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.237 14:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:01.237 14:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:01.237 14:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:01.237 14:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:01.237 14:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:01.802 14:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.802 14:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.060 nvme0n1 00:28:02.060 14:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:02.060 14:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.060 Running I/O for 2 seconds... 
00:28:04.369 26727.00 IOPS, 104.40 MiB/s [2024-11-06T14:32:32.004Z] 26868.00 IOPS, 104.95 MiB/s 00:28:04.369 Latency(us) 00:28:04.369 [2024-11-06T14:32:32.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.369 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:04.369 nvme0n1 : 2.01 26887.79 105.03 0.00 0.00 4756.31 2495.41 14417.92 00:28:04.369 [2024-11-06T14:32:32.004Z] =================================================================================================================== 00:28:04.369 [2024-11-06T14:32:32.004Z] Total : 26887.79 105.03 0.00 0.00 4756.31 2495.41 14417.92 00:28:04.369 { 00:28:04.369 "results": [ 00:28:04.369 { 00:28:04.369 "job": "nvme0n1", 00:28:04.369 "core_mask": "0x2", 00:28:04.369 "workload": "randwrite", 00:28:04.369 "status": "finished", 00:28:04.369 "queue_depth": 128, 00:28:04.369 "io_size": 4096, 00:28:04.369 "runtime": 2.008012, 00:28:04.369 "iops": 26887.78752318213, 00:28:04.369 "mibps": 105.0304200124302, 00:28:04.369 "io_failed": 0, 00:28:04.369 "io_timeout": 0, 00:28:04.370 "avg_latency_us": 4756.307298851802, 00:28:04.370 "min_latency_us": 2495.409230769231, 00:28:04.370 "max_latency_us": 14417.92 00:28:04.370 } 00:28:04.370 ], 00:28:04.370 "core_count": 1 00:28:04.370 } 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:04.370 | select(.opcode=="crc32c") 00:28:04.370 | "\(.module_name) \(.executed)"' 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 103145 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 103145 ']' 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 103145 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103145 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103145' 00:28:04.370 killing process with pid 103145 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 103145 00:28:04.370 Received shutdown signal, test time was about 2.000000 seconds 00:28:04.370 00:28:04.370 Latency(us) 00:28:04.370 [2024-11-06T14:32:32.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.370 [2024-11-06T14:32:32.005Z] =================================================================================================================== 00:28:04.370 [2024-11-06T14:32:32.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:04.370 14:32:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 103145 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=103237 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 103237 /var/tmp/bperf.sock 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 103237 ']' 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:04.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:04.936 14:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.936 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:04.936 Zero copy mechanism will not be used. 00:28:04.936 [2024-11-06 14:32:32.387070] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
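Note: the MiB/s columns in each result table are just IOPS scaled by the configured I/O size, so the tables can be cross-checked by hand; for example, the 4 KiB randread pass reported 20061.48 IOPS and the 128 KiB randread pass 9750.07 IOPS:

# IOPS * io_size / 2^20 should reproduce the MiB/s column
awk 'BEGIN { printf "%.2f\n", 20061.48 * 4096   / 1048576 }'   # 78.37, matches the table
awk 'BEGIN { printf "%.2f\n", 9750.07  * 131072 / 1048576 }'   # 1218.76, matches the table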
00:28:04.936 [2024-11-06 14:32:32.387182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103237 ] 00:28:04.936 [2024-11-06 14:32:32.542171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.194 [2024-11-06 14:32:32.621529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.761 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:05.761 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:05.761 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:05.761 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:05.761 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.019 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.019 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.277 nvme0n1 00:28:06.277 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.277 14:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.535 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.535 Zero copy mechanism will not be used. 00:28:06.535 Running I/O for 2 seconds... 
00:28:08.401 11620.00 IOPS, 1452.50 MiB/s [2024-11-06T14:32:36.036Z] 11703.50 IOPS, 1462.94 MiB/s
00:28:08.401 Latency(us)
00:28:08.401 [2024-11-06T14:32:36.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:08.401 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:08.401 nvme0n1 : 2.00 11698.08 1462.26 0.00 0.00 1364.60 1058.66 7259.37
00:28:08.401 [2024-11-06T14:32:36.036Z] ===================================================================================================================
00:28:08.401 [2024-11-06T14:32:36.036Z] Total : 11698.08 1462.26 0.00 0.00 1364.60 1058.66 7259.37
00:28:08.401 {
00:28:08.401   "results": [
00:28:08.401     {
00:28:08.401       "job": "nvme0n1",
00:28:08.401       "core_mask": "0x2",
00:28:08.401       "workload": "randwrite",
00:28:08.401       "status": "finished",
00:28:08.401       "queue_depth": 16,
00:28:08.401       "io_size": 131072,
00:28:08.401       "runtime": 2.002808,
00:28:08.401       "iops": 11698.075901434386,
00:28:08.401       "mibps": 1462.2594876792982,
00:28:08.401       "io_failed": 0,
00:28:08.401       "io_timeout": 0,
00:28:08.401       "avg_latency_us": 1364.5977638495356,
00:28:08.401       "min_latency_us": 1058.6584615384616,
00:28:08.401       "max_latency_us": 7259.372307692308
00:28:08.401     }
00:28:08.401   ],
00:28:08.401   "core_count": 1
00:28:08.401 }
00:28:08.401 14:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:08.401 14:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:08.401 14:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:08.401 14:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:08.401 | select(.opcode=="crc32c")
00:28:08.401 | "\(.module_name) \(.executed)"'
00:28:08.401 14:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 103237
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 103237 ']'
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 103237
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103237
killing process with pid 103237
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103237'
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 103237
00:28:08.660 Received shutdown signal, test time was about 2.000000 seconds
00:28:08.660 
00:28:08.660 Latency(us)
00:28:08.660 [2024-11-06T14:32:36.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:08.660 [2024-11-06T14:32:36.295Z] ===================================================================================================================
00:28:08.660 [2024-11-06T14:32:36.295Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:08.660 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 103237
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 102912
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 102912 ']'
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 102912
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 102912
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 102912'
00:28:09.226 killing process with pid 102912
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 102912
00:28:09.226 14:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 102912
00:28:09.792 
00:28:09.792 real 0m19.371s
00:28:09.792 user 0m36.880s
00:28:09.792 sys 0m3.614s
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:09.792 ************************************
00:28:09.792 END TEST nvmf_digest_clean ************************************
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:09.792 ************************************
00:28:09.792 START TEST nvmf_digest_error ************************************
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error
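Before the bperf app was torn down above, the pass/fail verdict for the clean test came from the get_accel_stats step (digest.sh@93-96): the harness asks the bdevperf app which accel module actually executed the crc32c operations and compares it against the expected module, which is software here since scan_dsa=false. A rough equivalent of that check, reusing only the rpc.py call and jq filter visible in the trace:

    # Prints "<module> <executed>" for the crc32c opcode, e.g. "software <n>".
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    # The digest path only counts as exercised if crc32c ran at all,
    # and in the module the test expected.
    (( acc_executed > 0 )) && [[ $acc_module == software ]] || exit 1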
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=103358
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 103358
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 103358 ']'
00:28:09.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:09.792 14:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:10.050 [2024-11-06 14:32:37.449470] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:28:10.050 [2024-11-06 14:32:37.449556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:10.050 [2024-11-06 14:32:37.599048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:10.050 [2024-11-06 14:32:37.679342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:10.050 [2024-11-06 14:32:37.679375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:10.050 [2024-11-06 14:32:37.679383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:10.050 [2024-11-06 14:32:37.679395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:10.050 [2024-11-06 14:32:37.679404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
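What is traced below is the error-test configuration: the target is up but paused, crc32c is rerouted to the error-injecting accel module before the framework initializes, and only once bdevperf has attached with --ddgst is the module told to start corrupting digests. In outline, with the two RPC sockets spelled out (the assumption that rpc_cmd talks to the target's default /var/tmp/spdk.sock while bperf_rpc uses /var/tmp/bperf.sock follows the paths printed in the log; the bdev, transport, and listener setup done by common_target_config is omitted since its exact RPCs are not shown):

    SPDK=/home/vagrant/spdk_repo/spdk
    TGT="$SPDK/scripts/rpc.py"                          # target, default /var/tmp/spdk.sock
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock" # bdevperf app

    # Target side, while still paused: route every crc32c operation to the
    # error module. The assignment has to land before framework_start_init,
    # which is why the target was started with --wait-for-rpc.
    $TGT accel_assign_opc -o crc32c -m error
    $TGT framework_start_init   # assumed here; the log batches it inside common_target_config

    # Initiator side: count NVMe errors per type and retry failed I/O without
    # limit, so each injected digest error is logged and retried, not fatal.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # The error module is armed but idle while the controller attaches cleanly.
    $TGT accel_error_inject_error -o crc32c -t disable
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Now corrupt the next 256 crc32c results: each bad DDGST the target sends
    # surfaces in the trace as "data digest error on tqpair" plus a retried READ
    # completing with COMMAND TRANSIENT TRANSPORT ERROR.
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 256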
00:28:10.050 [2024-11-06 14:32:37.680073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:10.983 [2024-11-06 14:32:38.332600] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:10.983 null0
00:28:10.983 [2024-11-06 14:32:38.526513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:10.983 [2024-11-06 14:32:38.550620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=103402
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 103402 /var/tmp/bperf.sock
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 103402 ']'
00:28:10.983 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:10.984 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:10.984 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:10.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:10.984 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:10.984 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:10.984 14:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:11.242 [2024-11-06 14:32:38.626666] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:28:11.242 [2024-11-06 14:32:38.626780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103402 ]
00:28:11.242 [2024-11-06 14:32:38.784439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:11.499 [2024-11-06 14:32:38.880075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:12.073 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:12.073 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:12.073 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:12.073 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:12.333 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:12.333 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.333 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:12.333 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.333 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:12.333 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:12.590 nvme0n1
00:28:12.590 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:12.590 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.590 14:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:12.590 14:32:40
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.590 14:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:12.590 14:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.590 Running I/O for 2 seconds... 00:28:12.590 [2024-11-06 14:32:40.115656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.115711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.115727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.128593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.128634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.128646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.141236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.141276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.141288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.154373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.154412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.154424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.167574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.167611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.167622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.181056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.181093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.181104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.192400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.192435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.192445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.206207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.206243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.206254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.590 [2024-11-06 14:32:40.218897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.590 [2024-11-06 14:32:40.218934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.590 [2024-11-06 14:32:40.218945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.231334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.231371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.231382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.244179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.244216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.244226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.256655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.256690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.256701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.270100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.270136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.270146] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.282604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.282640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.282650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.295609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.295645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.295656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.308137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.308171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.308182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.321127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.321165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.321175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.333316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.333353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.333363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.345933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.345980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-11-06 14:32:40.345991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.848 [2024-11-06 14:32:40.358652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.848 [2024-11-06 14:32:40.358688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:12.849 [2024-11-06 14:32:40.358699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.371695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.371732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.371742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.384458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.384493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.384503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.399176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.399212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.399222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.411693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.411730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.411740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.423904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.423943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.423966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.436380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.436415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.436426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.449136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.449172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.449183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.461765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.461801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.461812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.849 [2024-11-06 14:32:40.474635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:12.849 [2024-11-06 14:32:40.474672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-11-06 14:32:40.474682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.487408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.487444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.487454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.500522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.500557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.500568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.513394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.513430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.513440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.526271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.526307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.526318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.538721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.538757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.538767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.551618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.551654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.551665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.563914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.563961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.563972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.576669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.576705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.576715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.589343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.589378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.589388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.601785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.601821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.601831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.614782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.614819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.614829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.627623] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.627659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.627670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.640194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.640232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.640243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.653145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.653180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.653190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.665851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.106 [2024-11-06 14:32:40.665887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.106 [2024-11-06 14:32:40.665898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.106 [2024-11-06 14:32:40.678637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.107 [2024-11-06 14:32:40.678675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.107 [2024-11-06 14:32:40.678685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.107 [2024-11-06 14:32:40.691303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.107 [2024-11-06 14:32:40.691339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.107 [2024-11-06 14:32:40.691350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.107 [2024-11-06 14:32:40.704009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.107 [2024-11-06 14:32:40.704043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.107 [2024-11-06 14:32:40.704054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.107 [2024-11-06 14:32:40.716767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.107 [2024-11-06 14:32:40.716804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.107 [2024-11-06 14:32:40.716814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.107 [2024-11-06 14:32:40.728979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.107 [2024-11-06 14:32:40.729014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.107 [2024-11-06 14:32:40.729025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.742190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.742227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.742237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.755408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.755444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.755455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.768365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.768401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.768412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.780823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.780859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.780870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.793713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.793749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.793759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.806449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.806485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.806495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.821437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.821472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.821483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.834096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.834133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.834143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.846809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.846847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.846858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.859373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.859409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.859420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.364 [2024-11-06 14:32:40.872508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.364 [2024-11-06 14:32:40.872546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.364 [2024-11-06 14:32:40.872556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.885248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.885284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11969 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.885294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.897820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.897856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.897867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.910353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.910390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.910401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.923141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.923177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.923187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.935810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.935847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.935858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.948618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.948653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.948663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.961467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.961502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.961513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.974302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.974338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.974349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.365 [2024-11-06 14:32:40.987357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.365 [2024-11-06 14:32:40.987393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.365 [2024-11-06 14:32:40.987403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.623 [2024-11-06 14:32:40.999873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.623 [2024-11-06 14:32:40.999909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.623 [2024-11-06 14:32:40.999920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.623 [2024-11-06 14:32:41.012647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.623 [2024-11-06 14:32:41.012682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.623 [2024-11-06 14:32:41.012692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.623 [2024-11-06 14:32:41.025631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.623 [2024-11-06 14:32:41.025667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.623 [2024-11-06 14:32:41.025678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.038350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.038386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.038397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.051854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.051892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.051910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.064497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.064532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.064543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.077645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.077680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.077691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.091295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.091332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.091343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 19769.00 IOPS, 77.22 MiB/s [2024-11-06T14:32:41.259Z] [2024-11-06 14:32:41.105557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.105595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.105605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.118642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.118679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.118689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.131557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.131594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.131604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.144464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.144499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.144510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.157454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.157489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.157499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.170393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.170429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.170440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.183047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.183083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.183094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.196191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.196226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.196236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.208521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.208551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.208559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.220065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.220096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.220104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.233172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.233204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.233213] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.244465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.244495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.244504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.624 [2024-11-06 14:32:41.256401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.624 [2024-11-06 14:32:41.256431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.624 [2024-11-06 14:32:41.256439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.268030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.268061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.268069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.279442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.279474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.279483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.289692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.289727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.289738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.300694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.300724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.300733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.313380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.313412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:13.883 [2024-11-06 14:32:41.313421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.325157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.325188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.325196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.338915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.338957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.338966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.348836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.348868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.348876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.360618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.360649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.360657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.372133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.372164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.372173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.383573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.383604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.383613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.394860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.394892] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.394901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.406572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.406603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.406611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.418128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.418158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.418166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.429559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.429590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.429598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.441983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.442014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.442022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.453464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.453495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.453503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.465041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.465072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.465080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.476491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:28:13.883 [2024-11-06 14:32:41.476521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.476529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.883 [2024-11-06 14:32:41.487957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.883 [2024-11-06 14:32:41.487987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.883 [2024-11-06 14:32:41.487996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.884 [2024-11-06 14:32:41.499499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.884 [2024-11-06 14:32:41.499530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.884 [2024-11-06 14:32:41.499539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.884 [2024-11-06 14:32:41.511237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:13.884 [2024-11-06 14:32:41.511268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.884 [2024-11-06 14:32:41.511276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.522859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.522891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.522899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.535203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.535236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.535244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.546360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.546391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.546399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.557914] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.557956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.557965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.569362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.569399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.569407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.581004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.581035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.581043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.591910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.591941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.591958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.603391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.603422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.603430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.615239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.615270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.615278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.626617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.626648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.626656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.638584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.638615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.638623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.650078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.650109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.650117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.661475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.661505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.661512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.673266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.673296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.673305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.684821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.684852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.684860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.696466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.696497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.696506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.142 [2024-11-06 14:32:41.708024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.142 [2024-11-06 14:32:41.708056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.142 [2024-11-06 14:32:41.708065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.143 [2024-11-06 14:32:41.719645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.143 [2024-11-06 14:32:41.719677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.143 [2024-11-06 14:32:41.719685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.143 [2024-11-06 14:32:41.731240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.143 [2024-11-06 14:32:41.731271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.143 [2024-11-06 14:32:41.731280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.143 [2024-11-06 14:32:41.743529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.143 [2024-11-06 14:32:41.743561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.143 [2024-11-06 14:32:41.743570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.143 [2024-11-06 14:32:41.755457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.143 [2024-11-06 14:32:41.755490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.143 [2024-11-06 14:32:41.755498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.143 [2024-11-06 14:32:41.767787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.143 [2024-11-06 14:32:41.767818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.143 [2024-11-06 14:32:41.767827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.400 [2024-11-06 14:32:41.779538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.400 [2024-11-06 14:32:41.779570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.400 [2024-11-06 14:32:41.779579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.400 [2024-11-06 14:32:41.791137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.400 [2024-11-06 14:32:41.791169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11438 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:14.400 [2024-11-06 14:32:41.791177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.400 [2024-11-06 14:32:41.802935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.400 [2024-11-06 14:32:41.802973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.400 [2024-11-06 14:32:41.802982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.400 [2024-11-06 14:32:41.814465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.400 [2024-11-06 14:32:41.814496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.400 [2024-11-06 14:32:41.814505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.826092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.826124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.826133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.837628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.837660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.837669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.849139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.849171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.849179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.862622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.862655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.862664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.874010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.874040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.874049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.885103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.885133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.885141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.896290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.896320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.896327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.907478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.907510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.907518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.918466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.918497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.918506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.929672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.929704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.929712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.939328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.939359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.939367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.950691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.950722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.950730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.963860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.963892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.963900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.975329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.975359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.975368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.986686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.986719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.986727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:41.998066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:41.998098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:41.998107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:42.009345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:42.009376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:42.009385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 14:32:42.020534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:42.020564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:42.020572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.401 [2024-11-06 
14:32:42.031696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.401 [2024-11-06 14:32:42.031727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.401 [2024-11-06 14:32:42.031735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.660 [2024-11-06 14:32:42.042693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.660 [2024-11-06 14:32:42.042724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.660 [2024-11-06 14:32:42.042732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.660 [2024-11-06 14:32:42.054336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.660 [2024-11-06 14:32:42.054368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.660 [2024-11-06 14:32:42.054376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.660 [2024-11-06 14:32:42.065548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.660 [2024-11-06 14:32:42.065579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.660 [2024-11-06 14:32:42.065587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.660 [2024-11-06 14:32:42.076836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.660 [2024-11-06 14:32:42.076867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.660 [2024-11-06 14:32:42.076875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.660 [2024-11-06 14:32:42.088555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.660 [2024-11-06 14:32:42.088586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.660 [2024-11-06 14:32:42.088594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.660 20770.00 IOPS, 81.13 MiB/s [2024-11-06T14:32:42.295Z] [2024-11-06 14:32:42.100142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:14.660 [2024-11-06 14:32:42.100172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.660 [2024-11-06 14:32:42.100181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.660 00:28:14.660 Latency(us) 00:28:14.660 [2024-11-06T14:32:42.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.660 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:14.660 nvme0n1 : 2.00 20764.62 81.11 0.00 0.00 6155.61 3705.30 17543.48 00:28:14.660 [2024-11-06T14:32:42.295Z] =================================================================================================================== 00:28:14.660 [2024-11-06T14:32:42.295Z] Total : 20764.62 81.11 0.00 0.00 6155.61 3705.30 17543.48 00:28:14.660 { 00:28:14.660 "results": [ 00:28:14.660 { 00:28:14.660 "job": "nvme0n1", 00:28:14.660 "core_mask": "0x2", 00:28:14.660 "workload": "randread", 00:28:14.660 "status": "finished", 00:28:14.660 "queue_depth": 128, 00:28:14.660 "io_size": 4096, 00:28:14.660 "runtime": 2.0036, 00:28:14.660 "iops": 20764.623677380714, 00:28:14.660 "mibps": 81.11181123976841, 00:28:14.660 "io_failed": 0, 00:28:14.660 "io_timeout": 0, 00:28:14.660 "avg_latency_us": 6155.613103473778, 00:28:14.660 "min_latency_us": 3705.3046153846153, 00:28:14.660 "max_latency_us": 17543.483076923076 00:28:14.660 } 00:28:14.660 ], 00:28:14.660 "core_count": 1 00:28:14.660 } 00:28:14.660 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:14.660 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:14.660 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:14.660 | .driver_specific 00:28:14.660 | .nvme_error 00:28:14.660 | .status_code 00:28:14.660 | .command_transient_transport_error' 00:28:14.660 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 103402 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 103402 ']' 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 103402 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103402 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:14.918 killing process with pid 103402 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103402' 00:28:14.918 Received shutdown signal, test time was about 2.000000 seconds 00:28:14.918 00:28:14.918 Latency(us) 00:28:14.918 [2024-11-06T14:32:42.553Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.918 [2024-11-06T14:32:42.553Z] =================================================================================================================== 00:28:14.918 [2024-11-06T14:32:42.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 103402 00:28:14.918 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 103402 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=103499 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 103499 /var/tmp/bperf.sock 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 103499 ']' 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:15.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:15.484 14:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.484 [2024-11-06 14:32:42.869466] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:15.484 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:15.484 Zero copy mechanism will not be used. 
00:28:15.484 [2024-11-06 14:32:42.869556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103499 ] 00:28:15.484 [2024-11-06 14:32:43.013537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.484 [2024-11-06 14:32:43.092568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.416 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.417 14:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.674 nvme0n1 00:28:16.674 14:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:16.674 14:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.674 14:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.674 14:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.674 14:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:16.674 14:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:16.951 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.951 Zero copy mechanism will not be used. 00:28:16.951 Running I/O for 2 seconds... 
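[Note] The xtrace lines above are the core recipe of this digest-error test: enable per-command NVMe error counters, attach the target with data digest enabled, arm crc32c corruption in the accel layer, then drive I/O. A minimal sketch of that sequence, assembled only from RPCs that appear verbatim in this trace; the socket paths, the 10.0.0.3:4420 address, and the nqn.2016-06.io.spdk:cnode1 NQN are specific to this run, and the socket reached by rpc_cmd for accel_error_inject_error is assumed to be the SPDK default target socket.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bdevperf side (its RPC server listens on /var/tmp/bperf.sock; it was started
    # above as "bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z"):
    # keep per-status-code NVMe error statistics and retry failed I/O indefinitely.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the TCP target with data digest (--ddgst) on, so a corrupted CRC is detected.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side (default RPC socket, assumed): corrupt every 32nd crc32c operation.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the timed workload; each corrupted digest surfaces above as a
    # "data digest error" plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests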
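[Note] Pass/fail for each run is then decided from the error counters rather than from the log noise: with --nvme-error-stat enabled, bdev_get_iostat exposes a per-status-code histogram, and the test requires a positive transient-transport-error count, as in the "(( 163 > 0 ))" check after the first run above. A sketch of the helper as it can be reconstructed from this trace (the real host/digest.sh may differ in detail):

    get_transient_errcount() {  # $1 = bdev name, e.g. nvme0n1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # First run above: 163 corrupted digests were detected and retried.
    (( $(get_transient_errcount nvme0n1) > 0 ))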
00:28:16.951 [2024-11-06 14:32:44.319737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:28:16.951 [2024-11-06 14:32:44.319787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.951 [2024-11-06 14:32:44.319799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:16.951 [2024-11-06 14:32:44.323609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:28:16.951 [2024-11-06 14:32:44.323644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.952 [2024-11-06 14:32:44.323654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:16.952 [2024-11-06 14:32:44.327249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:28:16.952 [2024-11-06 14:32:44.327282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.952 [2024-11-06 14:32:44.327291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record sequence repeats from [2024-11-06 14:32:44.330803] through [2024-11-06 14:32:44.559689]: a data digest error on tqpair=(0x61500002b280), the affected READ command print (sqid:1, cid cycling through 0-10 and 15, nsid:1, varying lba, len:32), and its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061, p:0 m:0 dnr:0 ...]
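For context on the repeated failures above: in NVMe/TCP, when data digest (DDGST) is enabled, each data-bearing PDU carries a CRC32C checksum over its payload. The nvme_tcp_accel_seq_recv_compute_crc32_done callback reports a data digest error when the digest recomputed over the received bytes does not match the digest carried in the PDU, and the command is then completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly as the nvme_qpair.c:474 lines show. Below is a minimal, self-contained sketch of such a digest check; it is illustrative only (a bitwise CRC32C, not SPDK's accel-offloaded implementation), and the names crc32c and verify_ddgst are hypothetical:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78: the
 * algorithm NVMe/TCP uses for the HDGST/DDGST fields. Illustrative
 * only; SPDK computes this through its accel framework. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receive-side check: compare the digest carried in the
 * PDU against one recomputed over the received payload. A mismatch is
 * what the log above reports as a "data digest error". */
static int verify_ddgst(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst;
}

int main(void)
{
    /* "123456789" is the standard CRC check string; CRC32C = 0xE3069283. */
    const uint8_t msg[] = "123456789";
    uint32_t good = crc32c(msg, 9);
    printf("crc32c=0x%08X ok=%d corrupted=%d\n", good,
           verify_ddgst(msg, 9, good),        /* matches: 1   */
           verify_ddgst(msg, 9, good ^ 1u));  /* mismatch: 0  */
    return 0;
}

Checking against the standard test vector confirms the polynomial. Note that a digest mismatch is reported with a transient transport status rather than a hard failure, which matches the dnr:0 (Do Not Retry bit clear) in every completion above: the host is permitted to retry the command.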
[... the sequence continues uninterrupted from [2024-11-06 14:32:44.563107] through [2024-11-06 14:32:44.794726], the elapsed-time stamp advancing from 00:28:16.954 to 00:28:17.227 ...]
00:28:17.227 [2024-11-06 14:32:44.798208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:28:17.227 [2024-11-06 14:32:44.798241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.227 [2024-11-06 14:32:44.798249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:17.227 [2024-11-06 14:32:44.801718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:28:17.227 [2024-11-06 14:32:44.801750] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.801759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.805202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.805235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.805243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.808688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.808721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.808730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.812202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.812233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.812241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.815659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.815691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.815699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.819163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.819192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.819201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.822654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.822686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.822695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.826184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.826215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.826223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.227 [2024-11-06 14:32:44.829775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.227 [2024-11-06 14:32:44.829806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.227 [2024-11-06 14:32:44.829815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.228 [2024-11-06 14:32:44.833339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.228 [2024-11-06 14:32:44.833371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.228 [2024-11-06 14:32:44.833380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.228 [2024-11-06 14:32:44.836856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.228 [2024-11-06 14:32:44.836887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.228 [2024-11-06 14:32:44.836895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.228 [2024-11-06 14:32:44.840457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.228 [2024-11-06 14:32:44.840488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.228 [2024-11-06 14:32:44.840497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.228 [2024-11-06 14:32:44.843981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.228 [2024-11-06 14:32:44.844011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.228 [2024-11-06 14:32:44.844019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.228 [2024-11-06 14:32:44.847504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.228 [2024-11-06 14:32:44.847538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.228 [2024-11-06 14:32:44.847546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.228 [2024-11-06 14:32:44.851042] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.228 [2024-11-06 14:32:44.851072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.228 [2024-11-06 14:32:44.851081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.228 [2024-11-06 14:32:44.854541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.228 [2024-11-06 14:32:44.854573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.228 [2024-11-06 14:32:44.854581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.858099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.858130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.858139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.861676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.861709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.861717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.865117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.865149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.865157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.868683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.868716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.868726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.872215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.872247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.872256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.875792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.875825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.875833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.879301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.879334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.879343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.882797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.882829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.882838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.886295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.886326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.886334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.889765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.889798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.889806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.893253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.893285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.893293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.896655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.896687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.896695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.900165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.900197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.900205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.903764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.903796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.903805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.907255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.907288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.907297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.910705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.910737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.910746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.914185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.914217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.914225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.917688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.917720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.917728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.921158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.921191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:17.489 [2024-11-06 14:32:44.921199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.489 [2024-11-06 14:32:44.924626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.489 [2024-11-06 14:32:44.924658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.924666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.928094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.928125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.928133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.931563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.931595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.931604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.935003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.935034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.935042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.938525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.938557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.938565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.942021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.942050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.942058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.945501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.945534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.945543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.948920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.948996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.949006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.952464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.952496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.952504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.956027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.956057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.956066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.959530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.959562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.959570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.962976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.963007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.963015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.966438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.966470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.966478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.969855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.969888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.969897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.973388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.973421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.973430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.976837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.976869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.976877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.980238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.980277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.980286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.983699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.983731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.983740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.987118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.987150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.987158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.990638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.990670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.990679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.994118] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.994149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.994157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:44.997587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:44.997620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:44.997629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:45.001097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:45.001129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:45.001137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:45.004534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:45.004566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:45.004582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:45.007975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:45.008005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:45.008013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:45.011393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:45.011425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:45.011434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.490 [2024-11-06 14:32:45.014845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.490 [2024-11-06 14:32:45.014878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.490 [2024-11-06 14:32:45.014886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.018388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.018419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.018428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.021892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.021924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.021932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.025327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.025359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.025367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.028734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.028766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.028775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.032167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.032199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.032208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.035589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.035622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.035630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.039047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.039078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.039087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.042524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.042557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.042565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.045971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.046001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.046010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.049405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.049437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.049445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.052799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.052831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.052839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.056215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.056247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.056255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.059594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.059626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.059635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.063008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.063037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.063045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.066397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.066429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.066437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.069793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.069824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.069832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.073298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.073330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.073338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.076688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.076720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.076728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.080077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.080108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.080116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.083419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.083451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.083459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.086793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.086825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.086833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.090136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.090166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.090175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.093505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.093536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.093545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.096908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.096940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.096959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.100291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.100321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.100329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.103655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.103687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.491 [2024-11-06 14:32:45.103695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.491 [2024-11-06 14:32:45.107069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.491 [2024-11-06 14:32:45.107100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.492 [2024-11-06 14:32:45.107108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.492 [2024-11-06 14:32:45.110549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:17.492 [2024-11-06 14:32:45.110580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.492 [2024-11-06 14:32:45.110588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.492 [2024-11-06 14:32:45.113886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.492 [2024-11-06 14:32:45.113917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.492 [2024-11-06 14:32:45.113925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.492 [2024-11-06 14:32:45.117310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.492 [2024-11-06 14:32:45.117342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.492 [2024-11-06 14:32:45.117350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.492 [2024-11-06 14:32:45.120790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.492 [2024-11-06 14:32:45.120820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.492 [2024-11-06 14:32:45.120828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.124139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.124171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.124179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.127490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.127521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.127529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.130864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.130895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.130903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.134248] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.134279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.134288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.137598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.137631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.137639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.141023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.141052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.141060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.144521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.144552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.144560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.147914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.147959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.147968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.151310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.151348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.151356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.154832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.154864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.154873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.158182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.158213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.158222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.161569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.161601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.161609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.165013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.165042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.165050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.168393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.168424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.168433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.171720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.171752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.171760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.175102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.175132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.175140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.178471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.178504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.178512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.181761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.181793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.181801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.752 [2024-11-06 14:32:45.185210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.752 [2024-11-06 14:32:45.185241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.752 [2024-11-06 14:32:45.185249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.188529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.188561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.188569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.191871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.191902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.191910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.195216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.195248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.195255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.198536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.198569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.198577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.201883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.201916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.201924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.205329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.205361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.205369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.208634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.208665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.208674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.212062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.212092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.212100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.215514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.215546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.215554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.218894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.218927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.218935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.222449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.222481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.222490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.225827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.225860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.225868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.229378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.229410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.229418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.232819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.232851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.232859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.236240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.236284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.236293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.239685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.239717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.239725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.243239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.243272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.243280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.246873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.246905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.246914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.250362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.250395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.250403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.253843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.253875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.253883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.257392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.257425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.257433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.260969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.260999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.261007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.264433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.264464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.264473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.268016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.268048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.268056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.271456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.271488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.271497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.274888] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.274921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.753 [2024-11-06 14:32:45.274929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.753 [2024-11-06 14:32:45.278357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.753 [2024-11-06 14:32:45.278390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.278399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.281762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.281796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.281804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.285174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.285206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.285215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.288549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.288580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.288589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.291993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.292024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.292032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.295528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.295561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.295570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.299018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.299048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.299056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.302454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.302486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.302495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.305866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.305899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.305907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.309385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.309418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.309426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.754 9045.00 IOPS, 1130.62 MiB/s [2024-11-06T14:32:45.389Z] [2024-11-06 14:32:45.314301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.314333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.314342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.317793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.317826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.317835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.321389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.321421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 
[2024-11-06 14:32:45.321429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.324915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.324958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.324967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.328391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.328422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.328430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.331804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.331837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.331845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.335257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.335290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.335298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.338819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.338852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.338860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.342512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.342544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.342552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.345964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.345994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.346002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.349377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.349409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.349418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.352941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.352986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.352995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.356383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.356413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.356422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.359763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.359796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.359804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.363248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.363280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.363289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.366884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 14:32:45.366916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.366924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.370367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.754 [2024-11-06 
14:32:45.370399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.754 [2024-11-06 14:32:45.370408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.754 [2024-11-06 14:32:45.373788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.755 [2024-11-06 14:32:45.373822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.755 [2024-11-06 14:32:45.373831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.755 [2024-11-06 14:32:45.377360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.755 [2024-11-06 14:32:45.377393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.755 [2024-11-06 14:32:45.377401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.755 [2024-11-06 14:32:45.380915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:17.755 [2024-11-06 14:32:45.381032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.755 [2024-11-06 14:32:45.381051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.384378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.384410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.384418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.387789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.387822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.387831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.391258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.391290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.391299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.394617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.394649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.394658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.398041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.398072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.398080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.401502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.401534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.401543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.405021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.405062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.405071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.408445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.408476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.014 [2024-11-06 14:32:45.408484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.014 [2024-11-06 14:32:45.411887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.014 [2024-11-06 14:32:45.411919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.411928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.415296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.415328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.415337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.015 
[2024-11-06 14:32:45.418737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.418770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.418778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.422214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.422246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.422256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.425664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.425697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.425705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.429157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.429190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.429199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.432651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.432683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.432691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.436130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.436162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.436170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.439522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.439554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.439563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.442924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.442969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.442978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.446360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.446394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.446409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.449877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.449911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.449919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.453352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.453386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.453394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.456834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.456867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.456875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.460310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.460340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.460348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.463744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.463776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 
14:32:45.463785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.467206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.467238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.467246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.470675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.470708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.470716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.474156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.474189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.474197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.477601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.477634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.477643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.481058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.481089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.481097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.484512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.484543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.484551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.487919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.487962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.487971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.491449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.491482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.491491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.494992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.015 [2024-11-06 14:32:45.495023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.015 [2024-11-06 14:32:45.495031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.015 [2024-11-06 14:32:45.498541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.016 [2024-11-06 14:32:45.498574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.016 [2024-11-06 14:32:45.498582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.016 [2024-11-06 14:32:45.501989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.016 [2024-11-06 14:32:45.502020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.016 [2024-11-06 14:32:45.502029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.016 [2024-11-06 14:32:45.505512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.016 [2024-11-06 14:32:45.505544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.016 [2024-11-06 14:32:45.505553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.016 [2024-11-06 14:32:45.508974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.016 [2024-11-06 14:32:45.509004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.016 [2024-11-06 14:32:45.509021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.016 [2024-11-06 14:32:45.512369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.016 [2024-11-06 14:32:45.512398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.016 [2024-11-06 14:32:45.512407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.016 [2024-11-06 14:32:45.515731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:28:18.016 [2024-11-06 14:32:45.515764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.016 [2024-11-06 14:32:45.515772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats roughly every 3.5 ms from 14:32:45.519 through 14:32:46.008 (~140 further occurrences elided): nvme_tcp.c:1365 reports a data digest error on tqpair=(0x61500002b280), followed by the affected READ command (qid:1, rotating cid 0/2/3/5/6/8/9/10/15, nsid:1, varying lba, len:32) and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:28:18.541 [2024-11-06 14:32:46.012362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:28:18.541 [2024-11-06 14:32:46.012392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.541 [2024-11-06 14:32:46.012401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.541 [2024-11-06 14:32:46.015795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.541 [2024-11-06 14:32:46.015827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.541 [2024-11-06 14:32:46.015836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.541 [2024-11-06 14:32:46.019216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.541 [2024-11-06 14:32:46.019248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.541 [2024-11-06 14:32:46.019257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.541 [2024-11-06 14:32:46.022699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.022731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.022739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.026204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.026236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.026245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.029746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.029778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.029787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.033217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.033249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.033257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.036666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.036698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.036706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.040153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.040184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.040216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.043615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.043669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.043678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.047116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.047149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.047157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.050622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.050655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.050663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.054085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.054117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.054126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.057518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.057550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.057559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.061035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.061065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.061074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.064493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.064524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.064532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.068010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.068039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.068048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.071421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.071453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.071461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.074862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.074895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.074903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.078324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.078356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.078364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.081769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.081801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.081809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.085265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.085297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.085305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.088676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.088708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.088716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.092066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.092097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.092105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.095450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.095481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.095490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.098867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.098899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.098907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.102387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.102420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.102428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.105882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.105915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.105923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.109435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.109467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.109475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.112959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.542 [2024-11-06 14:32:46.112988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.542 [2024-11-06 14:32:46.112997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.542 [2024-11-06 14:32:46.116405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.116436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.116444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.119869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.119900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.119908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.123300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.123333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.123341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.126767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.126799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.126807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.130216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.130248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.130256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.133678] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.133711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.133719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.137157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.137189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.137197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.140672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.140705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.140714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.144110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.144142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.144150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.147554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.147586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.147594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.150992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.151023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.151031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.154371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.154404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.154412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.157882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.157916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.157924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.161321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.161353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.161362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.164812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.164844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.164852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.168329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.168359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.168368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.543 [2024-11-06 14:32:46.171766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.543 [2024-11-06 14:32:46.171798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.543 [2024-11-06 14:32:46.171807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.175233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.175265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.175273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.178697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.178730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.178738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.182177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.182209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.182218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.185615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.185647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.185656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.189018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.189049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.189057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.192531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.192562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.192571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.195966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.195995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.196003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.199379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.199412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.199420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.202919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.202961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.202970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.206308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.206340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.206348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.209794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.209826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.209835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.213233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.213265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.213274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.216636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.216668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.216676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.802 [2024-11-06 14:32:46.220079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.802 [2024-11-06 14:32:46.220110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.802 [2024-11-06 14:32:46.220119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.223525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.223558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.223566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.226944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.226987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.226996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.230482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.230514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.230523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.233927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.233968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.233977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.237415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.237448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.237456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.240820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.240852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.240860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.244346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.244376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.244385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.247731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.247764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.247772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.251195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.251227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.251235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.254742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.254775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.254784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.258290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.258322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.258330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.261784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.261817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.261825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.265225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.265257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.265265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.268631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.268664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.268672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.272118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.272151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.272159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.275600] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.275632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.275640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.279023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.279054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.279062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.282656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.282687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.282695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.286114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.286146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.803 [2024-11-06 14:32:46.286155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.803 [2024-11-06 14:32:46.289629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.803 [2024-11-06 14:32:46.289662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.289671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.804 [2024-11-06 14:32:46.293199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.804 [2024-11-06 14:32:46.293232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.293241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.804 [2024-11-06 14:32:46.296632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.804 [2024-11-06 14:32:46.296663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.296671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.804 [2024-11-06 14:32:46.300083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.804 [2024-11-06 14:32:46.300115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.300124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.804 [2024-11-06 14:32:46.303600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.804 [2024-11-06 14:32:46.303632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.303640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.804 [2024-11-06 14:32:46.307067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.804 [2024-11-06 14:32:46.307098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.307106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.804 [2024-11-06 14:32:46.310498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.804 [2024-11-06 14:32:46.310531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.310539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.804 8971.00 IOPS, 1121.38 MiB/s [2024-11-06T14:32:46.439Z] [2024-11-06 14:32:46.314898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:28:18.804 [2024-11-06 14:32:46.314931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.804 [2024-11-06 14:32:46.314939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.804 00:28:18.804 Latency(us) 00:28:18.804 [2024-11-06T14:32:46.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:18.804 nvme0n1 : 2.00 8967.61 1120.95 0.00 0.00 1781.41 504.12 5217.67 00:28:18.804 [2024-11-06T14:32:46.439Z] =================================================================================================================== 00:28:18.804 [2024-11-06T14:32:46.439Z] Total : 8967.61 1120.95 0.00 0.00 1781.41 504.12 5217.67 00:28:18.804 { 00:28:18.804 "results": [ 00:28:18.804 { 00:28:18.804 "job": "nvme0n1", 00:28:18.804 "core_mask": "0x2", 00:28:18.804 "workload": "randread", 00:28:18.804 "status": "finished", 00:28:18.804 "queue_depth": 16, 00:28:18.804 "io_size": 131072, 00:28:18.804 "runtime": 2.002541, 00:28:18.804 
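A quick consistency check on the summary just above: at this job's 128 KiB I/O size, MiB/s is iops * io_size / 2^20, i.e. iops / 8 here, using only values from the results JSON:

    $ echo 'scale=10; 8967.606655743877 * 131072 / 1048576' | bc
    1120.9508319679

which matches the reported "mibps". Note also that io_failed is 0 despite the hundreds of digest errors logged above: with the retry policy this test configures (--bdev-retry-count -1, visible when the next bperf instance is set up below), the bdev layer retries each failed read until it succeeds, so the corruption surfaces only in the error counters queried next.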
"iops": 8967.606655743877, 00:28:18.804 "mibps": 1120.9508319679846, 00:28:18.804 "io_failed": 0, 00:28:18.804 "io_timeout": 0, 00:28:18.804 "avg_latency_us": 1781.4099730139556, 00:28:18.804 "min_latency_us": 504.12307692307695, 00:28:18.804 "max_latency_us": 5217.673846153846 00:28:18.804 } 00:28:18.804 ], 00:28:18.804 "core_count": 1 00:28:18.804 } 00:28:18.804 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:18.804 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:18.804 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:18.804 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:18.804 | .driver_specific 00:28:18.804 | .nvme_error 00:28:18.804 | .status_code 00:28:18.804 | .command_transient_transport_error' 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 579 > 0 )) 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 103499 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 103499 ']' 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 103499 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103499 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:19.062 killing process with pid 103499 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103499' 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 103499 00:28:19.062 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.062 00:28:19.062 Latency(us) 00:28:19.062 [2024-11-06T14:32:46.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.062 [2024-11-06T14:32:46.697Z] =================================================================================================================== 00:28:19.062 [2024-11-06T14:32:46.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.062 14:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 103499 00:28:19.629 14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:19.629 14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:19.629 14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:19.629 14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:19.629 14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=103589
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 103589 /var/tmp/bperf.sock
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 103589 ']'
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
14:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.887 [2024-11-06 14:32:47.192282] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:28:19.887 [2024-11-06 14:32:47.192397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103589 ]
00:28:19.887 [2024-11-06 14:32:47.346683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:19.887 [2024-11-06 14:32:47.425099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:20.453 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:20.453 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:20.453 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:20.453 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:20.711 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:20.711 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.711 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:20.711 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
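Before the controller for the randwrite pass attaches, crc32c error injection is switched off, and it is re-armed in corrupt mode only once the digest-enabled connection is up; the attach and the corrupt-mode call follow immediately below. Condensed, the sequence the trace walks through is (rpc_cmd and bperf_rpc are the autotest wrappers shown in the log; flags copied verbatim from the trace):

    rpc_cmd accel_error_inject_error -o crc32c -t disable    # digests compute correctly while attaching
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0       # --ddgst enables NVMe/TCP data digest
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # re-arm injection in corrupt mode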
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.969 nvme0n1 00:28:20.969 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:20.969 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.969 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.969 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.969 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:20.969 14:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.228 Running I/O for 2 seconds... 00:28:21.228 [2024-11-06 14:32:48.636628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:28:21.228 [2024-11-06 14:32:48.637533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.637575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.646115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:28:21.228 [2024-11-06 14:32:48.646869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.646901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.657347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:28:21.228 [2024-11-06 14:32:48.658485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.658516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.667495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:28:21.228 [2024-11-06 14:32:48.668891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.668921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.674338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:28:21.228 [2024-11-06 14:32:48.674939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:21.228 [2024-11-06 14:32:48.674977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.685805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:28:21.228 [2024-11-06 14:32:48.686957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.686987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.694682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:28:21.228 [2024-11-06 14:32:48.695513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.695542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.704050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:28:21.228 [2024-11-06 14:32:48.704967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.713403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:28:21.228 [2024-11-06 14:32:48.714205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.714234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.723227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:28:21.228 [2024-11-06 14:32:48.724028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.724056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.732283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:28:21.228 [2024-11-06 14:32:48.732928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.732966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.742257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:28:21.228 [2024-11-06 14:32:48.743186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:24661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.743215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.751599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:28:21.228 [2024-11-06 14:32:48.752415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.752442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.760619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:28:21.228 [2024-11-06 14:32:48.761294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.761321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.772121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:28:21.228 [2024-11-06 14:32:48.773312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.773342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.781270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:28:21.228 [2024-11-06 14:32:48.782444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.782473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.790118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:28:21.228 [2024-11-06 14:32:48.790987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.791015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.799442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:28:21.228 [2024-11-06 14:32:48.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.800416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.808650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:28:21.228 [2024-11-06 14:32:48.809127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.809156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.819521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:28:21.228 [2024-11-06 14:32:48.820635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.820663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.828542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:28:21.228 [2024-11-06 14:32:48.829499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.829527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.837636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:28:21.228 [2024-11-06 14:32:48.838487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.838516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.846651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:28:21.228 [2024-11-06 14:32:48.847359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.847387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:21.228 [2024-11-06 14:32:48.855406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:28:21.228 [2024-11-06 14:32:48.856004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.228 [2024-11-06 14:32:48.856032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.866751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:28:21.487 [2024-11-06 14:32:48.867532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.867561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.876053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bf6020 00:28:21.487 [2024-11-06 14:32:48.876699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.876729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.884959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:28:21.487 [2024-11-06 14:32:48.885684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.885713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.896687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:28:21.487 [2024-11-06 14:32:48.898126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.898154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.903632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:28:21.487 [2024-11-06 14:32:48.904273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.904300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.915254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:28:21.487 [2024-11-06 14:32:48.916438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.916465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.924630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:28:21.487 [2024-11-06 14:32:48.925323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.925351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.933671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:28:21.487 [2024-11-06 14:32:48.934177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.934205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.944301] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:28:21.487 [2024-11-06 14:32:48.945468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.945497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.953382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:28:21.487 [2024-11-06 14:32:48.954442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.954471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.962406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:28:21.487 [2024-11-06 14:32:48.963319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.963347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.971470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:28:21.487 [2024-11-06 14:32:48.972295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.972322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.982698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:28:21.487 [2024-11-06 14:32:48.983876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.983906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:48.991790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:28:21.487 [2024-11-06 14:32:48.992876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:48.992905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:49.000828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:28:21.487 [2024-11-06 14:32:49.001755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:49.001783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:21.487 
[2024-11-06 14:32:49.012262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:28:21.487 [2024-11-06 14:32:49.013872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:49.013901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:49.019027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:28:21.487 [2024-11-06 14:32:49.019844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:49.019872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:49.030497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:28:21.487 [2024-11-06 14:32:49.031867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:49.031896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:49.037299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:28:21.487 [2024-11-06 14:32:49.037891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:49.037920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:49.048018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788 00:28:21.487 [2024-11-06 14:32:49.049456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:49.049485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:21.487 [2024-11-06 14:32:49.057886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:28:21.487 [2024-11-06 14:32:49.058736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.487 [2024-11-06 14:32:49.058765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:21.488 [2024-11-06 14:32:49.066980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:28:21.488 [2024-11-06 14:32:49.067736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.488 [2024-11-06 14:32:49.067764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:21.488 [2024-11-06 14:32:49.076027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:28:21.488 [2024-11-06 14:32:49.076623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.488 [2024-11-06 14:32:49.076661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:21.488 [2024-11-06 14:32:49.088030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:28:21.488 [2024-11-06 14:32:49.089424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.488 [2024-11-06 14:32:49.089452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:21.488 [2024-11-06 14:32:49.095880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:28:21.488 [2024-11-06 14:32:49.096551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.488 [2024-11-06 14:32:49.096580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:21.488 [2024-11-06 14:32:49.105105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:28:21.488 [2024-11-06 14:32:49.105568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.488 [2024-11-06 14:32:49.105597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:21.488 [2024-11-06 14:32:49.115647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:28:21.488 [2024-11-06 14:32:49.116796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.488 [2024-11-06 14:32:49.116825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.124762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550 00:28:21.746 [2024-11-06 14:32:49.125794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.125822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.133793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:28:21.746 [2024-11-06 14:32:49.134688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.134718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.143413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:28:21.746 [2024-11-06 14:32:49.144160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.144188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.151822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:28:21.746 [2024-11-06 14:32:49.152491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.152519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.163230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0bc0 00:28:21.746 [2024-11-06 14:32:49.164431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.164459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.172058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:28:21.746 [2024-11-06 14:32:49.172958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.172985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.180963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:28:21.746 [2024-11-06 14:32:49.181641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.181670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.190043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:28:21.746 [2024-11-06 14:32:49.190620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.190648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.200980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:28:21.746 [2024-11-06 14:32:49.201717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:21.746 [2024-11-06 14:32:49.201746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.210027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:28:21.746 [2024-11-06 14:32:49.210579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.210608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.219274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:28:21.746 [2024-11-06 14:32:49.220013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.220041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.229101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:28:21.746 [2024-11-06 14:32:49.230096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.230124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.240499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:28:21.746 [2024-11-06 14:32:49.242043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.242070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.247265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:28:21.746 [2024-11-06 14:32:49.248027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.248055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.258671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:28:21.746 [2024-11-06 14:32:49.259978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.260006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:21.746 [2024-11-06 14:32:49.267521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:28:21.746 [2024-11-06 14:32:49.268517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:6022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.746 [2024-11-06 14:32:49.268545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.276829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:28:21.747 [2024-11-06 14:32:49.277891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.277920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.288270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:28:21.747 [2024-11-06 14:32:49.289877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.289906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.295043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:28:21.747 [2024-11-06 14:32:49.295863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.295891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.306451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:28:21.747 [2024-11-06 14:32:49.307815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.307845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.313230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:28:21.747 [2024-11-06 14:32:49.313813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.313841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.324628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:28:21.747 [2024-11-06 14:32:49.325756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.325784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.333815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:28:21.747 [2024-11-06 14:32:49.334485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.334514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.342991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:28:21.747 [2024-11-06 14:32:49.343525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.343554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.352331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:28:21.747 [2024-11-06 14:32:49.353227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.353255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.361682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:28:21.747 [2024-11-06 14:32:49.362458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.362488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:21.747 [2024-11-06 14:32:49.370728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:28:21.747 [2024-11-06 14:32:49.371366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.747 [2024-11-06 14:32:49.371395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.382593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:28:22.005 [2024-11-06 14:32:49.383913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.383941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.390842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:28:22.005 [2024-11-06 14:32:49.391422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.391450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.400590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016beee38 00:28:22.005 [2024-11-06 14:32:49.401531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.401560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.409796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:28:22.005 [2024-11-06 14:32:49.410277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.410306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.420744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:28:22.005 [2024-11-06 14:32:49.421851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.421881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.429536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:28:22.005 [2024-11-06 14:32:49.430441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.430470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.438864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:28:22.005 [2024-11-06 14:32:49.439834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.439863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.450314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:28:22.005 [2024-11-06 14:32:49.451816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.451846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.005 [2024-11-06 14:32:49.457112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:28:22.005 [2024-11-06 14:32:49.457854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.005 [2024-11-06 14:32:49.457881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.466467] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:28:22.006 [2024-11-06 14:32:49.467098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.467128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.475968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef270 00:28:22.006 [2024-11-06 14:32:49.476585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.476613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.487254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:28:22.006 [2024-11-06 14:32:49.488051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.488079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.496537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:28:22.006 [2024-11-06 14:32:49.497130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.497159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.505622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:28:22.006 [2024-11-06 14:32:49.506151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.506178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.516538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:28:22.006 [2024-11-06 14:32:49.517689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.517718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.525562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:28:22.006 [2024-11-06 14:32:49.526567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.526596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.006 
[2024-11-06 14:32:49.534644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:28:22.006 [2024-11-06 14:32:49.535557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.535586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.545078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:28:22.006 [2024-11-06 14:32:49.546351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.546379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.553990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:28:22.006 [2024-11-06 14:32:49.554992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.555020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.563307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:28:22.006 [2024-11-06 14:32:49.564375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.564403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.572558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:28:22.006 [2024-11-06 14:32:49.573165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.573194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.582014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:28:22.006 [2024-11-06 14:32:49.582598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.582627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.592933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfdeb0 00:28:22.006 [2024-11-06 14:32:49.594157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.594185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.601970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:28:22.006 [2024-11-06 14:32:49.603037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.006 [2024-11-06 14:32:49.603065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:22.006 [2024-11-06 14:32:49.611066 through 14:32:50.577633] repeated records of the same pattern on tqpair=(0x618000004480): tcp.c:2233:data_crc32_calc_done *ERROR* Data digest error (pdu values ranging 0x200016bddc00-0x200016bff3c8), each followed by nvme_qpair.c notices for the affected WRITE (sqid:1, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Interim sample: 26474.00 IOPS, 103.41 MiB/s [2024-11-06T14:32:49.641Z] 00:28:23.042
[2024-11-06 14:32:50.582887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:28:23.042 [2024-11-06 14:32:50.583629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.042 [2024-11-06 14:32:50.583658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:23.042 [2024-11-06 14:32:50.594408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:28:23.043 [2024-11-06 14:32:50.595733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.043 [2024-11-06 14:32:50.595762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:23.043 [2024-11-06 14:32:50.604011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:28:23.043 [2024-11-06 14:32:50.605196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.043 [2024-11-06 14:32:50.605224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:23.043 [2024-11-06 14:32:50.613860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:28:23.043 [2024-11-06 14:32:50.615041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.043 [2024-11-06 14:32:50.615068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:23.043 [2024-11-06 14:32:50.622128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:28:23.043 26620.50 IOPS, 103.99 MiB/s [2024-11-06T14:32:50.678Z] [2024-11-06 14:32:50.622894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.043 [2024-11-06 14:32:50.622919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:23.043 00:28:23.043 Latency(us) 00:28:23.043 [2024-11-06T14:32:50.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.043 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:23.043 nvme0n1 : 2.00 26621.00 103.99 0.00 0.00 4801.79 2432.39 13006.38 00:28:23.043 [2024-11-06T14:32:50.678Z] =================================================================================================================== 00:28:23.043 [2024-11-06T14:32:50.678Z] Total : 26621.00 103.99 0.00 0.00 4801.79 2432.39 13006.38 00:28:23.043 { 00:28:23.043 "results": [ 00:28:23.043 { 00:28:23.043 "job": "nvme0n1", 00:28:23.043 "core_mask": "0x2", 00:28:23.043 "workload": "randwrite", 00:28:23.043 "status": "finished", 00:28:23.043 "queue_depth": 128, 00:28:23.043 "io_size": 4096, 00:28:23.043 "runtime": 2.004771, 00:28:23.043 "iops": 26620.995614960513, 00:28:23.043 "mibps": 
103.9882641209395, 00:28:23.043 "io_failed": 0, 00:28:23.043 "io_timeout": 0, 00:28:23.043 "avg_latency_us": 4801.794033470885, 00:28:23.043 "min_latency_us": 2432.393846153846, 00:28:23.043 "max_latency_us": 13006.375384615385 00:28:23.043 } 00:28:23.043 ], 00:28:23.043 "core_count": 1 00:28:23.043 } 00:28:23.043 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:23.043 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:23.043 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:23.043 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:23.043 | .driver_specific 00:28:23.043 | .nvme_error 00:28:23.043 | .status_code 00:28:23.043 | .command_transient_transport_error' 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 103589 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 103589 ']' 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 103589 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103589 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:23.301 killing process with pid 103589 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103589' 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 103589 00:28:23.301 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.301 00:28:23.301 Latency(us) 00:28:23.301 [2024-11-06T14:32:50.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.301 [2024-11-06T14:32:50.936Z] =================================================================================================================== 00:28:23.301 [2024-11-06T14:32:50.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.301 14:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 103589 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:23.867 14:32:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=103676 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 103676 /var/tmp/bperf.sock 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 103676 ']' 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:23.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:23.867 14:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.867 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.867 Zero copy mechanism will not be used. 00:28:23.867 [2024-11-06 14:32:51.391450] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:23.867 [2024-11-06 14:32:51.391566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103676 ] 00:28:24.126 [2024-11-06 14:32:51.546455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.126 [2024-11-06 14:32:51.622687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.691 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:24.691 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:24.691 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.691 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.949 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:24.949 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.949 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.949 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.949 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t 
tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.949 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.208 nvme0n1 00:28:25.208 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:25.208 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.208 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.208 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.208 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:25.208 14:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.208 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.208 Zero copy mechanism will not be used. 00:28:25.208 Running I/O for 2 seconds... 00:28:25.208 [2024-11-06 14:32:52.825125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.208 [2024-11-06 14:32:52.825354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.208 [2024-11-06 14:32:52.825391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.208 [2024-11-06 14:32:52.828450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.208 [2024-11-06 14:32:52.828675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.208 [2024-11-06 14:32:52.828710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.208 [2024-11-06 14:32:52.831777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.208 [2024-11-06 14:32:52.832038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.208 [2024-11-06 14:32:52.832069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.208 [2024-11-06 14:32:52.835104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.208 [2024-11-06 14:32:52.835316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.208 [2024-11-06 14:32:52.835364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.208 [2024-11-06 14:32:52.838344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
00:28:25.208 [2024-11-06 14:32:52.825125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:25.208 [2024-11-06 14:32:52.825354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.208 [2024-11-06 14:32:52.825391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.208 [2024-11-06 14:32:52.828450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:25.208 [2024-11-06 14:32:52.828675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.208 [2024-11-06 14:32:52.828710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line cycle repeats for each retried WRITE, roughly every 3 ms from 14:32:52.831 through 14:32:53.205, with only the timestamps, the lba values, and the sqhd field (cycling 0001/0021/0041/0061) changing ...]
00:28:25.764 [2024-11-06 14:32:53.208005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:25.764 [2024-11-06 14:32:53.208141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.764 [2024-11-06 14:32:53.208170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.764 [2024-11-06 14:32:53.210736]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.764 [2024-11-06 14:32:53.210884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.764 [2024-11-06 14:32:53.210913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.764 [2024-11-06 14:32:53.213518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.764 [2024-11-06 14:32:53.213658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.764 [2024-11-06 14:32:53.213687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.216304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.216451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.216480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.219064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.219211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.219241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.221849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.222000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.222024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.224626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.224766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.224796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.227393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.227542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.227571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.230191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.230340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.230369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.233001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.233150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.233178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.235768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.235906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.238529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.238669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.238698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.241322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.241461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.241489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.244083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.244221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.244249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.246876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.247027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.247056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.249730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.249885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.249915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.252597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.252746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.252775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.255426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.255578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.255607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.258237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.258377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.258406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.261122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.261262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.261291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.263925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.264078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.264107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.266768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.266910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.266939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.269587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.269725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.269755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.272359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.272507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.272536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.275142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.275279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.275308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.277905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.278054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.278082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.280702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.280851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.280880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.283478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.283616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.283645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.286258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.765 [2024-11-06 14:32:53.286394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.765 [2024-11-06 14:32:53.286423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.765 [2024-11-06 14:32:53.289014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.828 [2024-11-06 14:32:53.289160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.828 [2024-11-06 14:32:53.289188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.828 [2024-11-06 14:32:53.291766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.828 [2024-11-06 14:32:53.291912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.828 [2024-11-06 14:32:53.291941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.828 [2024-11-06 14:32:53.294575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.828 [2024-11-06 14:32:53.294717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.828 [2024-11-06 14:32:53.294745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.828 [2024-11-06 14:32:53.297346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.297484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.297514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.300119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.300257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.300293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.302880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.303033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.303063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.305682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.305822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.305851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.308477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.308611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.308641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.311257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.311407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.311437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.314045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.314190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.314218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.316807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.316963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.316991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.319600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.319749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.319777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.322385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.322526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.322563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.325145] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.325281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.325309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.327901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.328052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.328077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.330656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.330800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.330828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.333431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.333575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.333604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.336218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.336362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.336391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.338981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.339118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.339170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.341754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.341893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.341922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.344545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.344686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.344717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.347318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.347465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.347498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.350109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.350256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.350281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.352882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.353039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.353068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.355692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.355832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.355862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.358438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.358585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.358614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.361170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.361309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.361338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.363895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.364043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.364067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.829 [2024-11-06 14:32:53.366694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.829 [2024-11-06 14:32:53.366832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.829 [2024-11-06 14:32:53.366861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.369494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.369634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.369664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.372257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.372401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.372429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.375016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.375154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.375183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.377780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.377929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.377967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.380557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.380695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.380724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.383324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.383461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.383491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.386047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.386186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.386215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.388828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.388987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.389017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.391613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.391752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.391781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.830 [2024-11-06 14:32:53.394382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:25.830 [2024-11-06 14:32:53.394522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.830 [2024-11-06 14:32:53.394551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.397151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.397300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.397334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.399895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.400046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.400073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.402681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.402821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.402849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.405459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.405596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.405625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.408224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.408369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.408406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.410990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.411127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.411155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.413767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.413914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.413944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.416537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.416684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.416713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.419300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.419439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.419468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.422118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.422256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.422285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.424913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.425074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.425103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.427686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.427837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.427866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.430428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.430570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.430599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.433190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.433327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.433357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.435933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.436080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.436108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.438696] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.438835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.438863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.441477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.441615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.441642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.444208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.444355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.444380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.446962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.447100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.447128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.449722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.449861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.449891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.452502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.452640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.452669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.455278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.455417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.455448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.458059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.458198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.458221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.460853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.461002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.461025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.463563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.463702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.092 [2024-11-06 14:32:53.463730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.092 [2024-11-06 14:32:53.466326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.092 [2024-11-06 14:32:53.466474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.466503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.469093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.469230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.469259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.471842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.472000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.472027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.474633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.474783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.474812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.477413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.477550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.477579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.480172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.480329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.480358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.482924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.483072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.483100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.485727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.485866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.485895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.488538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.488675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.488703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.491301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.491440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.491468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.494081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.494219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.494247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.496860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.497008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.497036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.499585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.499724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.499761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.502362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.502500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.502529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.505171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.505309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.505337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.507913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.508059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.508087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.510674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.510812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.510841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.513440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.513576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.513605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.516184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.516343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.516371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.518985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.519132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.519159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.521728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.521874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.521902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.524487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.524632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.524661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.527248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.527387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.527416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.529999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.530138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.530166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.532780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.532917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.532957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.535517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.535668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.535697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.538283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.093 [2024-11-06 14:32:53.538422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.093 [2024-11-06 14:32:53.538451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.093 [2024-11-06 14:32:53.541057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.541203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.541231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.543825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.543974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.544001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.546605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.546745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.546773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.549357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.549500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.549525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.552099] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.552238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.552272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.554876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.555026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.555053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.557639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.557777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.557806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.560409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.560545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.560573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.563134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.563273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.563302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.565880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.566029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.566057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.568627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.568766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.568795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.571342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.571486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.571513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.574133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.574270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.574298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.576874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.577023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.577051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.579607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.579746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.579774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.582400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.582547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.582575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.585211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.585381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.587974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.588109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.588137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.590726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.590872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.590902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.593519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.593658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.593687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.596253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.596407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.596435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.598985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.599129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.599157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.601770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.601908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.601937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.604554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.604694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.604722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.607328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.607466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.607497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.610078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.094 [2024-11-06 14:32:53.610215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.094 [2024-11-06 14:32:53.610237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.094 [2024-11-06 14:32:53.612801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.612941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.615578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.615718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.615744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.618388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.618527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.621125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.621266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.621290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.623879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.624037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.624060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.626661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.626818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.626856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.629411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.629550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.629574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.632198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.632369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.632387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.634938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.635085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.635113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.637719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.637856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.637880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.640505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.640642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.640666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.643274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.643417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.643441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.646016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.646160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.646183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.648766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.648905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.648934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.651482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.651627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.651651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.654259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.654422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.657018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.657153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.657176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.659795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.659939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.659972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.662581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.662720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.662737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.665348] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.665488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.665512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.668137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.668287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.668306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.670897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.671048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.671066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.673668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.673807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.673832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.676446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.676584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.676645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.679213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.679353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.679377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.681967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.682109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.682133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.095 [2024-11-06 14:32:53.684748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.095 [2024-11-06 14:32:53.684887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.095 [2024-11-06 14:32:53.684916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.687510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.687647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.687671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.690289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.690428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.690453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.693092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.693229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.693255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.695851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.696005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.696023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.698599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.698742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.698765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.701380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.701527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.701555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.704131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.704291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.704329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.706871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.707019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.707036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.709644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.709784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.709809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.712459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.712598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.712625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.715249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.715393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.715445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.718010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.718145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.718162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.096 [2024-11-06 14:32:53.720782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.096 [2024-11-06 14:32:53.720925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:26.096 [2024-11-06 14:32:53.720941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.723520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.723665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.723689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.726317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.726463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.726485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.729122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.729271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.729293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.731858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.732018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.732042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.734630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.734775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.734799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.737459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.737597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.737620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.740206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.740363] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.740386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.742968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.743117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.743144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.745754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.745891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.745915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.748519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.748664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.748681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.751269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.751425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.751449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.754030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.754168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.754189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.756784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.756932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.756965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.759543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 
[2024-11-06 14:32:53.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.759707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.762301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.762445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.762469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.765063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.765201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.765224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.767758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.767897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.767915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.357 [2024-11-06 14:32:53.770474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.357 [2024-11-06 14:32:53.770614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.357 [2024-11-06 14:32:53.770652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.358 [2024-11-06 14:32:53.773265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.358 [2024-11-06 14:32:53.773413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.358 [2024-11-06 14:32:53.773456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.358 [2024-11-06 14:32:53.776037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.358 [2024-11-06 14:32:53.776185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.358 [2024-11-06 14:32:53.776209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.358 [2024-11-06 14:32:53.778807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.358 [2024-11-06 14:32:53.778962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.358 [2024-11-06 14:32:53.778988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.358 [2024-11-06 14:32:53.781568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.358 [2024-11-06 14:32:53.781717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.358 [2024-11-06 14:32:53.781741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same error/command/completion triple repeats for 14 more WRITEs at lba 17120, 12160, 8096, 17472, 9920, 25024, 1984, 19136, 7584, 13216, 12352, 14560, 2464 and 10368, with sqhd cycling 0001/0021/0041/0061 ...]
00:28:26.358 11009.00 IOPS, 1376.12 MiB/s [2024-11-06T14:32:53.993Z]
00:28:26.358 [2024-11-06 14:32:53.823997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.358 [2024-11-06 14:32:53.824079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.358 [2024-11-06 14:32:53.824096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... triples continue for lba 22720, 24064, 24992 and 3680 ...]
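The perf counter line above (11009.00 IOPS, 1376.12 MiB/s) is self-consistent with the surrounding WRITEs: each command covers len:32 blocks, and if the namespace uses 4 KiB logical blocks (an assumption; the block size is not printed in this excerpt) the two figures agree exactly:

    11009 IO/s x 32 blocks x 4096 B = 1,442,971,648 B/s ~= 1376.12 MiB/s

So throughput is holding up even though every WRITE in this stretch completes with a digest error.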
00:28:26.358 [2024-11-06 14:32:53.837885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.358 [2024-11-06 14:32:53.837994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.358 [2024-11-06 14:32:53.838010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... 26 more identical triples follow, for WRITEs at lba 21664, 3552, 4000, 5344, 19744, 25312, 12256, 23360, 14656, 21344, 20416, 1056, 23712, 9408, 16928, 19424, 14592, 16992, 9056, 16512, 13824, 288, 640, 21088, 19808 and 19552 ...]
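Each tcp.c:2233 data_crc32_calc_done error above means the CRC32C the receiver computed over a DATA PDU payload did not match the DDGST field carried in that PDU, which is exactly what a data-digest error-injection run should provoke. A minimal reference check of the NVMe/TCP data digest, as a sketch: crc32c() is a plain bitwise implementation of the Castagnoli CRC used for HDGST/DDGST, and ddgst_ok() is a hypothetical helper, not SPDK API (SPDK's real path uses table-driven or hardware-accelerated CRC32C):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (reflected polynomial 0x82F63B78): init all-ones,
     * final complement -- the digest NVMe/TCP carries in DDGST. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* A mismatch here is the condition the tcp.c errors above report. */
    static int ddgst_ok(const uint8_t *data, size_t len, uint32_t recv_ddgst)
    {
        return crc32c(data, len) == recv_ddgst;
    }

    int main(void)
    {
        const uint8_t payload[] = "example DATA PDU payload";
        uint32_t good = crc32c(payload, sizeof(payload));
        /* Flipping one digest bit models the injected corruption. */
        printf("intact=%d corrupted=%d\n",
               ddgst_ok(payload, sizeof(payload), good),
               ddgst_ok(payload, sizeof(payload), good ^ 1u));
        return 0;
    }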
00:28:26.359 [2024-11-06 14:32:53.912533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.359 [2024-11-06 14:32:53.912617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.359 [2024-11-06 14:32:53.912633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 32 more identical triples follow, for WRITEs at lba 8032, 21184, 18752, 11936, 4096, 9152, 14080, 6944, 11808, 19520, 4800, 23584, 576, 1504, 15296, 14912, 12864, 1120, 18176, 4160, 4768, 4384, 9184, 14048, 352, 8224, 11136, 2208, 704, 768, 4384 and 14976; several lba values recur during the run ...]
00:28:26.621 [2024-11-06 14:32:54.005173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.621 [2024-11-06 14:32:54.005258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.621 [2024-11-06 14:32:54.005275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
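Every completion notice above prints (00/22): the pair is (SCT/SC), status code type 0x0 (generic command status) and status code 0x22 (Transient Transport Error), and the trailing p/m/dnr flags come from the same 16-bit completion status field. A decode sketch of that field per the NVMe base specification; decode_status() is illustrative, not the SPDK routine that formats these lines:

    #include <stdint.h>
    #include <stdio.h>

    /* Completion status layout: P[0], SC[8:1], SCT[11:9], CRD[13:12],
     * M[14], DNR[15]. */
    struct cpl_status { unsigned p, sc, sct, crd, m, dnr; };

    static struct cpl_status decode_status(uint16_t status)
    {
        struct cpl_status s;
        s.p   = status & 0x1u;
        s.sc  = (status >> 1) & 0xFFu;
        s.sct = (status >> 9) & 0x7u;
        s.crd = (status >> 12) & 0x3u;
        s.m   = (status >> 14) & 0x1u;
        s.dnr = (status >> 15) & 0x1u;
        return s;
    }

    int main(void)
    {
        /* SCT=0x0, SC=0x22 reproduces the (00/22) seen above. */
        struct cpl_status s = decode_status(0x22u << 1);
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }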
00:28:26.621 [2024-11-06 14:32:54.007982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.621 [2024-11-06 14:32:54.008064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.621 [2024-11-06 14:32:54.008080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... 25 more identical triples follow, for WRITEs at lba 15840, 704, 19456, 22784, 5376, 9888, 3680, 23936, 19360, 21568, 23552, 4224, 18656, 23712, 22304, 16864, 21024, 5728, 24608, 22304, 9792, 25152, 4352, 13344 and 8896 ...]
00:28:26.622 [2024-11-06 14:32:54.080284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:28:26.622 [2024-11-06 14:32:54.080374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.622 [2024-11-06 14:32:54.080391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
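Note dnr:0 on every completion: Do Not Retry is clear, so a transient transport error is a retryable failure, and the qpair keeps issuing WRITEs rather than the test aborting on the first bad digest. A sketch of that retry decision, assuming for illustration that transient transport errors are the only retryable status of interest (real initiators also honor retry limits and CRD-based delays; should_retry() is hypothetical, not SPDK's nvme_qpair logic):

    #include <stdbool.h>
    #include <stdio.h>

    #define NVME_SCT_GENERIC            0x0u
    #define NVME_SC_TRANSIENT_TRANSPORT 0x22u   /* the 00/22 above */

    /* Retry only while DNR is clear, the status is transient, and a
     * retry budget remains. */
    static bool should_retry(unsigned sct, unsigned sc, unsigned dnr,
                             unsigned retries, unsigned max_retries)
    {
        if (dnr || retries >= max_retries)
            return false;
        return sct == NVME_SCT_GENERIC && sc == NVME_SC_TRANSIENT_TRANSPORT;
    }

    int main(void)
    {
        /* The completions above: sct=0x0, sc=0x22, dnr=0 -> retryable. */
        printf("retry=%d\n", should_retry(0x0u, 0x22u, 0u, 0u, 4u));
        return 0;
    }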
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.622 [2024-11-06 14:32:54.083177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.622 [2024-11-06 14:32:54.085890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.085990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.086012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.088679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.088764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.088812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.091470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.091559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.091575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.094262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.094360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.094377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.097044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.097135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.097160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.099826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.099911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.099929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.102667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.102750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.102767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.105436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.105536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.105560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.108209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.108316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.108334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.111039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.111123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.111140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.113800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.113883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.113899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.116622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.116714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.116730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.119397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.623 [2024-11-06 14:32:54.119493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.623 [2024-11-06 14:32:54.119518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.623 [2024-11-06 14:32:54.122184] 
[... repeated tcp.c:2233:data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR *NOTICE* triplets omitted, one per WRITE LBA, timestamps 14:32:54.122184 through 14:32:54.480654 ...]
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.483311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.483403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.483419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.486075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.486158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.486174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.488827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.488918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.488940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.491602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.491689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.491704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.494371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.494462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.494488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.497169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.497257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.497283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.499972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.500059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.500086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.502745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.502830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.502848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.505511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.505610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.505635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.508252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.508363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.508387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.511029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.511117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.511133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.513782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.513877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.513901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.889 [2024-11-06 14:32:54.516567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:26.889 [2024-11-06 14:32:54.516662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.889 [2024-11-06 14:32:54.516687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.150 [2024-11-06 14:32:54.519347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.150 [2024-11-06 14:32:54.519430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:27.150 [2024-11-06 14:32:54.519447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.150 [2024-11-06 14:32:54.522124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.150 [2024-11-06 14:32:54.522209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.150 [2024-11-06 14:32:54.522231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.150 [2024-11-06 14:32:54.524898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.150 [2024-11-06 14:32:54.524995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.150 [2024-11-06 14:32:54.525016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.150 [2024-11-06 14:32:54.527606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.150 [2024-11-06 14:32:54.527698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.150 [2024-11-06 14:32:54.527714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.150 [2024-11-06 14:32:54.530390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.530479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.530497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.533182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.533297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.533313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.535916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.536024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.536047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.538671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.538764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.538789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.541459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.541554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.541578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.544224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.544322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.544338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.546926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.547033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.547050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.549734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.549818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.549834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.552531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.552616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.552634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.555275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.555358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.555380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.558029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.558112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.558130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.560808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.560897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.560922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.563565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.563660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.563685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.566365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.566450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.566480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.569164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.569255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.569282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.571904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.572000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.572021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.574689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.574776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.574793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.577486] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.577570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.577588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.580195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.580329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.580351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.582898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.583025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.583041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.585674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.585757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.585776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.588477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.588592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.588616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.591220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.591316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.591332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.594013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.594097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.594119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.596740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.596830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.596854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.599521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.599605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.599622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.602290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.151 [2024-11-06 14:32:54.602374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.151 [2024-11-06 14:32:54.602391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.151 [2024-11-06 14:32:54.605048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.605139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.605156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.607777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.607875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.607898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.610574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.610663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.610688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.613321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.613419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.613446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.616089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.616180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.616204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.618846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.618929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.618963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.621627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.621722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.621746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.624414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.624499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.624522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.627178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.627262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.627286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.629966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.630048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.630071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.632721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.632805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.632822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.635480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.635565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.635581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.638265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.638350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.638367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.641042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.641127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.641144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.643764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.643848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.643864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.646541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.646627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.646643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.649310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.649399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.649415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.652090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.652180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.652207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.654826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.654910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.654926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.657614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.657698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.657715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.660526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.660638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.660666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.663387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.663489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.663506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.666165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.666262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.666279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.668897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.669058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.669086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.671648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.671734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.671750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.674448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.674531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.674548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.677229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.152 [2024-11-06 14:32:54.677313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.152 [2024-11-06 14:32:54.677329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.152 [2024-11-06 14:32:54.680003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.680092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.680108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.682760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.682856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.682872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.685594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.685686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.685702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.688373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.688470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.688487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.691138] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.691224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.691240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.693906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.694017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.694034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.696720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.696806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.696822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.699462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.699555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.699571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.702250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.702338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.702355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.705043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.705162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.705190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.707744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.707837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.707853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.710547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.710639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.710661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.713387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.713471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.713487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.716136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.716234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.716251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.718886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.718991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.719007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.721717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.721840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.721867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.724495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.724586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.724602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.727253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.727339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.727355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.730068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.730152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.730168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.732865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.732967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.732985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.735629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.735724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.735740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.738425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.738514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.738531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.741200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.741284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.741300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.743925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.744020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.744045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.746706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.746801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.746846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.749485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.749570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.749586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.752213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.153 [2024-11-06 14:32:54.752303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.153 [2024-11-06 14:32:54.752319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.153 [2024-11-06 14:32:54.754982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.755074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.755090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.757757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.757851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.757866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.760577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.760667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.760683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.763346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.763430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.763447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.766128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.766211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.766228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.768894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.768997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.769013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.771672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.771771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.771793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.774450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.774542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.774559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.777242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.777332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.777348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.154 [2024-11-06 14:32:54.779999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.154 [2024-11-06 14:32:54.780092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.154 [2024-11-06 14:32:54.780108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.412 [2024-11-06 14:32:54.782785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.412 [2024-11-06 14:32:54.782876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.412 [2024-11-06 14:32:54.782892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.412 [2024-11-06 14:32:54.785565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:28:27.412 [2024-11-06 14:32:54.785656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.412 [2024-11-06 14:32:54.785673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.788341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.788423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.788440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.791115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.791232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.791260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.793909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.794017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.794040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.796682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.796773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.796790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.799478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.799564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.799586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.802276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.802359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.802375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.805030] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.805113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.805129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.807800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.807882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.807899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.810574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.810664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.810680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.813325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.813423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.813441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.816093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.816178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.816194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.413 [2024-11-06 14:32:54.818858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.818978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.818994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.413 11078.00 IOPS, 1384.75 MiB/s [2024-11-06T14:32:55.048Z] [2024-11-06 14:32:54.822427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:28:27.413 [2024-11-06 14:32:54.822484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.413 [2024-11-06 14:32:54.822501] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.413 00:28:27.413 Latency(us) 00:28:27.413 [2024-11-06T14:32:55.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.413 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:27.413 nvme0n1 : 2.00 11073.13 1384.14 0.00 0.00 1441.60 945.23 3705.30 00:28:27.413 [2024-11-06T14:32:55.048Z] =================================================================================================================== 00:28:27.413 [2024-11-06T14:32:55.048Z] Total : 11073.13 1384.14 0.00 0.00 1441.60 945.23 3705.30 00:28:27.413 { 00:28:27.413 "results": [ 00:28:27.413 { 00:28:27.413 "job": "nvme0n1", 00:28:27.413 "core_mask": "0x2", 00:28:27.413 "workload": "randwrite", 00:28:27.413 "status": "finished", 00:28:27.413 "queue_depth": 16, 00:28:27.413 "io_size": 131072, 00:28:27.413 "runtime": 2.002234, 00:28:27.413 "iops": 11073.131312324134, 00:28:27.413 "mibps": 1384.1414140405168, 00:28:27.413 "io_failed": 0, 00:28:27.413 "io_timeout": 0, 00:28:27.413 "avg_latency_us": 1441.6043266498511, 00:28:27.413 "min_latency_us": 945.2307692307693, 00:28:27.413 "max_latency_us": 3705.3046153846153 00:28:27.413 } 00:28:27.413 ], 00:28:27.413 "core_count": 1 00:28:27.413 } 00:28:27.413 14:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:27.413 14:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:27.413 14:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:27.413 14:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:27.413 | .driver_specific 00:28:27.413 | .nvme_error 00:28:27.413 | .status_code 00:28:27.413 | .command_transient_transport_error' 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 715 > 0 )) 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 103676 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 103676 ']' 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 103676 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103676 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:27.671 killing process with pid 103676 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103676' 00:28:27.671 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.671 00:28:27.671 Latency(us) 00:28:27.671 [2024-11-06T14:32:55.306Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:28:27.671 [2024-11-06T14:32:55.306Z] =================================================================================================================== 00:28:27.671 [2024-11-06T14:32:55.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 103676 00:28:27.671 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 103676 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 103358 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 103358 ']' 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 103358 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103358 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103358' 00:28:28.237 killing process with pid 103358 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 103358 00:28:28.237 14:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 103358 00:28:28.805 00:28:28.805 real 0m18.820s 00:28:28.805 user 0m35.968s 00:28:28.805 sys 0m3.599s 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:28.805 ************************************ 00:28:28.805 END TEST nvmf_digest_error 00:28:28.805 ************************************ 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:28.805 rmmod nvme_tcp 00:28:28.805 rmmod nvme_fabrics 00:28:28.805 rmmod nvme_keyring 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@129 -- # return 0 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 103358 ']' 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 103358 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 103358 ']' 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 103358 00:28:28.805 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (103358) - No such process 00:28:28.805 Process with pid 103358 is not found 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 103358 is not found' 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:28.805 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:28:29.065 00:28:29.065 real 0m39.157s 00:28:29.065 user 1m13.065s 00:28:29.065 sys 0m7.583s 00:28:29.065 ************************************ 00:28:29.065 END TEST nvmf_digest 00:28:29.065 ************************************ 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.065 ************************************ 00:28:29.065 START TEST nvmf_mdns_discovery 00:28:29.065 ************************************ 00:28:29.065 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:28:29.324 * Looking for test storage... 00:28:29.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( 
v = 0 )) 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:29.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.324 --rc genhtml_branch_coverage=1 00:28:29.324 --rc genhtml_function_coverage=1 00:28:29.324 --rc genhtml_legend=1 00:28:29.324 --rc geninfo_all_blocks=1 00:28:29.324 --rc geninfo_unexecuted_blocks=1 00:28:29.324 00:28:29.324 ' 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:29.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.324 --rc genhtml_branch_coverage=1 00:28:29.324 --rc genhtml_function_coverage=1 00:28:29.324 --rc genhtml_legend=1 00:28:29.324 --rc geninfo_all_blocks=1 00:28:29.324 --rc geninfo_unexecuted_blocks=1 00:28:29.324 00:28:29.324 ' 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:29.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.324 --rc genhtml_branch_coverage=1 00:28:29.324 --rc genhtml_function_coverage=1 00:28:29.324 --rc genhtml_legend=1 00:28:29.324 --rc geninfo_all_blocks=1 00:28:29.324 --rc geninfo_unexecuted_blocks=1 00:28:29.324 00:28:29.324 ' 00:28:29.324 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:29.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.324 --rc genhtml_branch_coverage=1 00:28:29.324 --rc genhtml_function_coverage=1 00:28:29.324 --rc genhtml_legend=1 00:28:29.324 --rc geninfo_all_blocks=1 00:28:29.324 --rc geninfo_unexecuted_blocks=1 00:28:29.324 00:28:29.324 ' 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:29.325 14:32:56 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:29.325 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:29.325 Cannot find device "nvmf_init_br" 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:29.325 Cannot find device "nvmf_init_br2" 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:29.325 Cannot find device "nvmf_tgt_br" 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:29.325 Cannot find device "nvmf_tgt_br2" 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:29.325 Cannot find device "nvmf_init_br" 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:29.325 Cannot find device "nvmf_init_br2" 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:29.325 Cannot find device "nvmf_tgt_br" 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:28:29.325 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:29.325 Cannot find device "nvmf_tgt_br2" 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:29.326 Cannot find device "nvmf_br" 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:29.326 Cannot find device "nvmf_init_if" 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:29.326 Cannot find device "nvmf_init_if2" 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:28:29.326 14:32:56 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:29.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:29.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:29.326 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:29.584 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:29.584 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:29.584 14:32:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
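At this point the nvmf_veth_init sequence from test/nvmf/common.sh has stood up the test topology: a fresh nvmf_tgt_ns_spdk namespace, two initiator-side and two target-side veth pairs, and the nvmf_br bridge. The entries immediately below enslave the four *_br peer ends to nvmf_br, insert the iptables ACCEPT rules (port 4420 on the initiator interfaces plus a FORWARD rule on the bridge), and ping each address once end to end. Condensed from the trace into plain commands, the topology being built is (a sketch assembled from the commands visible here; the individual "ip link set ... up" steps are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk                                # target side gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2  # initiator pair 2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator addresses stay in the root ns
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge that joins both sides
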
00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:29.584 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:29.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:29.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:28:29.585 00:28:29.585 --- 10.0.0.3 ping statistics --- 00:28:29.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.585 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:29.585 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:29.585 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:28:29.585 00:28:29.585 --- 10.0.0.4 ping statistics --- 00:28:29.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.585 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:29.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:28:29.585 00:28:29.585 --- 10.0.0.1 ping statistics --- 00:28:29.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.585 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:29.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:29.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:28:29.585 00:28:29.585 --- 10.0.0.2 ping statistics --- 00:28:29.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.585 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=104028 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 104028 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:28:29.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # '[' -z 104028 ']' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:29.585 14:32:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.843 [2024-11-06 14:32:57.226829] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
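The target process was launched with --wait-for-rpc, so it idles in a pre-init state until the test configures it over the RPC socket it is waiting on (/var/tmp/spdk.sock, per the waitforlisten message above); nvmf_set_config is one of the RPCs that is only valid before framework initialization, which is why that flag is used. The rpc_cmd traces that follow drive exactly this bring-up. Condensed into plain rpc.py calls, the sequence is (a sketch assuming the usual scripts/rpc.py wrapper in place of the harness's rpc_cmd helper; flags copied verbatim from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Pre-init configuration: must land before framework_start_init.
$rpc nvmf_set_config --discovery-filter=address
$rpc framework_start_init
# TCP transport with the harness's options, then the discovery listener
# on the standard NVMe-oF discovery port 8009.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.3 -s 8009
# ...followed below by four null bdevs (null0..null3) and bdev_wait_for_examine.
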
00:28:29.843 [2024-11-06 14:32:57.226914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.843 [2024-11-06 14:32:57.375394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.843 [2024-11-06 14:32:57.450061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.843 [2024-11-06 14:32:57.450101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.843 [2024-11-06 14:32:57.450110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.843 [2024-11-06 14:32:57.450121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.843 [2024-11-06 14:32:57.450129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.843 [2024-11-06 14:32:57.450781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@866 -- # return 0 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 [2024-11-06 14:32:58.302519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 [2024-11-06 14:32:58.310628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 null0 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 null1 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 null2 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 null3 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=104078 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 104078 /tmp/host.sock 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # '[' -z 104078 ']' 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:30.778 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:30.778 14:32:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.037 [2024-11-06 14:32:58.429967] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:31.037 [2024-11-06 14:32:58.430082] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104078 ] 00:28:31.037 [2024-11-06 14:32:58.587424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.295 [2024-11-06 14:32:58.681615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@866 -- # return 0 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=104108 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:28:31.861 14:32:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:28:31.861 Process 1072 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:28:31.861 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:28:31.861 Successfully dropped root privileges. 00:28:31.861 avahi-daemon 0.8 starting up. 00:28:31.861 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:28:31.861 Successfully called chroot(). 00:28:31.861 Successfully dropped remaining capabilities. 00:28:32.796 No service file found in /etc/avahi/services. 00:28:32.796 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:28:32.796 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:28:32.796 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:28:32.796 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:28:32.796 Network interface enumeration completed. 00:28:32.796 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
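The avahi-daemon whose startup is interleaved here was handed its configuration through a process substitution: the echo -e at host/mdns_discovery.sh@57 becomes /dev/fd/63, which the daemon reads with -f as if it were a config file. Written out as a regular file, the same configuration looks like this (an equivalent sketch; /tmp/avahi-test.conf is an illustrative path, not one the test actually uses):

cat > /tmp/avahi-test.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF
# Equivalent to the test's process-substitution form, run inside the target namespace:
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-test.conf

Restricting allow-interfaces to the two target-side veths (with IPv6 disabled) is why the join and address-registration messages around this point mention only nvmf_tgt_if and nvmf_tgt_if2.
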
00:28:32.796 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:28:32.796 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:28:32.796 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:28:32.796 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1299755576. 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:28:32.796 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:32.797 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:32.797 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:32.797 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.797 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:32.797 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.797 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:28:33.055 14:33:00 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:33.055 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 [2024-11-06 14:33:00.623178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.056 14:33:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:28:33.314 [2024-11-06 14:33:00.698647] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:28:34.249 [2024-11-06 14:33:01.598661] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:28:34.508 [2024-11-06 14:33:01.998679] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:34.508 [2024-11-06 14:33:01.998714] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:34.508 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:34.508 cookie is 0 00:28:34.508 is_local: 1 00:28:34.508 our_own: 0 00:28:34.508 wide_area: 0 00:28:34.508 multicast: 1 00:28:34.508 cached: 1 00:28:34.508 [2024-11-06 14:33:02.098673] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:34.508 [2024-11-06 14:33:02.098698] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:34.508 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:34.508 cookie is 0 00:28:34.508 is_local: 1 00:28:34.508 our_own: 0 00:28:34.508 wide_area: 0 00:28:34.508 multicast: 1 00:28:34.508 cached: 1 00:28:35.442 [2024-11-06 14:33:02.999782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.442 [2024-11-06 14:33:02.999834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.4, port=8009 00:28:35.442 [2024-11-06 14:33:02.999876] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:35.442 [2024-11-06 14:33:02.999888] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:35.442 [2024-11-06 14:33:02.999897] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:28:35.700 [2024-11-06 14:33:03.109239] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:35.700 [2024-11-06 14:33:03.109266] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:35.700 [2024-11-06 14:33:03.109286] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:35.701 [2024-11-06 14:33:03.195376] 
bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:28:35.701 [2024-11-06 14:33:03.249725] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:28:35.701 [2024-11-06 14:33:03.250510] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:28:35.701 [2024-11-06 14:33:03.252045] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:28:35.701 [2024-11-06 14:33:03.252070] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:35.701 [2024-11-06 14:33:03.258071] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 00:28:36.634 [2024-11-06 14:33:03.999707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.634 [2024-11-06 14:33:03.999747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b780 with addr=10.0.0.4, port=8009 00:28:36.634 [2024-11-06 14:33:03.999781] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:36.634 [2024-11-06 14:33:03.999790] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:36.634 [2024-11-06 14:33:03.999797] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:28:37.567 [2024-11-06 14:33:04.999717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-06 14:33:04.999755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ba00 with addr=10.0.0.4, port=8009 00:28:37.567 [2024-11-06 14:33:04.999789] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:37.567 [2024-11-06 14:33:04.999797] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:37.567 [2024-11-06 14:33:04.999804] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:38.134 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:38.134 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:38.134 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:38.134 [2024-11-06 14:33:05.700967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:28:38.134 [2024-11-06 14:33:05.702527] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:38.134 [2024-11-06 14:33:05.702566] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:38.134 [2024-11-06 14:33:05.708868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:28:38.134 [2024-11-06 14:33:05.709546] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:38.134 14:33:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.134 14:33:05 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:28:38.393 [2024-11-06 14:33:05.839642] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:38.393 [2024-11-06 14:33:05.839674] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:38.393 [2024-11-06 14:33:05.925718] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:38.393 [2024-11-06 14:33:06.009456] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:28:38.393 [2024-11-06 14:33:06.009477] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:28:38.393 [2024-11-06 14:33:06.009497] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:38.651 [2024-11-06 14:33:06.095571] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:28:38.651 [2024-11-06 14:33:06.149901] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:28:38.651 [2024-11-06 14:33:06.150586] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x61500002c680:1 started. 00:28:38.651 [2024-11-06 14:33:06.152097] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:28:38.651 [2024-11-06 14:33:06.152119] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:28:38.651 [2024-11-06 14:33:06.158461] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
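The xtrace records above (host/mdns_discovery.sh@85-@108) walk through the test's check_mdns_request_exists helper: it dumps the mDNS cache with `avahi-browse -t -r _nvme-disc._tcp -p` and scans the parseable `+;`/`=;` records for a service name, address, and port. At the first call only `spdk0` (the primary target) is advertised, so checking for `spdk1` on 10.0.0.4:8009 passes with check_type='not found'; after the 10.0.0.4 listeners are added and the discovery controller attaches, the very next records repeat the check expecting 'found'. A minimal re-creation of the helper, assembled from the trace alone — the upstream body may differ in detail:

    check_mdns_request_exists() {
        local process="$1" ip="$2" port="$3" check_type="$4"
        local output line lines
        # -t: terminate after the cache dump, -r: resolve services, -p: emit
        # parseable ';'-separated records like the '+;...'/'=;...' lines above
        output=$(avahi-browse -t -r _nvme-disc._tcp -p)
        readarray -t lines <<< "$output"
        for line in "${lines[@]}"; do
            # the trace tests each field as a separate glob match; collapsed here
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                [[ $check_type == found ]] && return 0
                return 1    # matched a record that was expected to be absent
            fi
        done
        [[ $check_type == "not found" ]] && return 0
        return 1            # expected record never showed up
    }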
00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:39.218 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:39.218 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:39.218 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:39.218 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:39.218 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:39.218 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:39.218 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:39.218 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:39.521 14:33:06 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:28:39.521 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.522 [2024-11-06 14:33:06.980039] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002c900:1 started. 00:28:39.522 [2024-11-06 14:33:06.982821] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x61500002cb80:1 started. 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.522 14:33:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:28:39.522 [2024-11-06 14:33:06.989011] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002c900 was disconnected and freed. delete nvme_qpair. 00:28:39.522 [2024-11-06 14:33:06.989679] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x61500002cb80 was disconnected and freed. delete nvme_qpair. 
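The verification steps above are built from small getter helpers, each an RPC call against the host app's /tmp/host.sock followed by a jq/sort/xargs pipeline; their bodies can be read directly off the xtrace (host/mdns_discovery.sh@65-@81 and @116-@117). Reconstructed here for reference, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py:

    get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }
    get_notification_count() {
        # -i resumes where the previous query left off; the trace suggests the
        # global notify_id then advances by the returned count (0 -> 2 -> 4 here)
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

At this point in the run the assertions at @162-@169 pass with get_subsystem_names = "mdns0_nvme0 mdns1_nvme0", get_bdev_list = "mdns0_nvme0n1 mdns1_nvme0n1", and get_subsystem_paths = "4420" for each controller: one namespace and one path per discovered subsystem, with two bdev-add notifications consumed.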
00:28:39.781 [2024-11-06 14:33:07.198772] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:39.781 [2024-11-06 14:33:07.198869] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:39.781 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:39.781 cookie is 0 00:28:39.781 is_local: 1 00:28:39.781 our_own: 0 00:28:39.781 wide_area: 0 00:28:39.781 multicast: 1 00:28:39.781 cached: 1 00:28:39.781 [2024-11-06 14:33:07.198929] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:28:39.781 [2024-11-06 14:33:07.298776] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:39.781 [2024-11-06 14:33:07.298884] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:39.781 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:39.781 cookie is 0 00:28:39.781 is_local: 1 00:28:39.781 our_own: 0 00:28:39.781 wide_area: 0 00:28:39.781 multicast: 1 00:28:39.781 cached: 1 00:28:39.781 [2024-11-06 14:33:07.298943] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:28:40.716 14:33:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:28:40.717 14:33:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.717 14:33:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:40.717 14:33:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.717 14:33:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.717 14:33:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:40.717 14:33:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.717 [2024-11-06 14:33:08.066028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:40.717 [2024-11-06 14:33:08.066713] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:40.717 [2024-11-06 14:33:08.066741] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:40.717 [2024-11-06 14:33:08.066770] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:40.717 [2024-11-06 14:33:08.066788] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.717 [2024-11-06 14:33:08.073989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:28:40.717 [2024-11-06 14:33:08.074730] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:40.717 [2024-11-06 14:33:08.074889] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.717 14:33:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:28:40.717 [2024-11-06 14:33:08.197812] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:28:40.717 [2024-11-06 14:33:08.204826] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:28:40.717 [2024-11-06 14:33:08.264230] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:28:40.717 [2024-11-06 14:33:08.264296] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:28:40.717 [2024-11-06 14:33:08.264307] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:28:40.717 [2024-11-06 14:33:08.264313] 
bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:40.717 [2024-11-06 14:33:08.264350] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:40.717 [2024-11-06 14:33:08.264371] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:28:40.717 [2024-11-06 14:33:08.264402] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:28:40.717 [2024-11-06 14:33:08.264409] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:40.717 [2024-11-06 14:33:08.264414] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:40.717 [2024-11-06 14:33:08.264425] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:40.717 [2024-11-06 14:33:08.309923] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:40.717 [2024-11-06 14:33:08.309938] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:40.717 [2024-11-06 14:33:08.310912] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:28:40.717 [2024-11-06 14:33:08.310925] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # xargs 00:28:41.651 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.652 [2024-11-06 14:33:09.259440] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:41.652 [2024-11-06 14:33:09.259541] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:41.652 [2024-11-06 14:33:09.259577] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:41.652 [2024-11-06 14:33:09.259594] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:41.652 [2024-11-06 14:33:09.259744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.259768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.259781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.259788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.259796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.259802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.259809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.259816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.259822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.652 [2024-11-06 14:33:09.267434] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:41.652 [2024-11-06 14:33:09.267497] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:41.652 [2024-11-06 14:33:09.269716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.652 14:33:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:28:41.652 [2024-11-06 14:33:09.271877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.271903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.271912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.271920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.271928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.271935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.271942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.652 [2024-11-06 14:33:09.271956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.652 [2024-11-06 14:33:09.271963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:28:41.652 [2024-11-06 14:33:09.279729] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:41.652 [2024-11-06 14:33:09.279753] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:41.652 [2024-11-06 14:33:09.279759] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:41.652 [2024-11-06 14:33:09.279766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:41.652 [2024-11-06 14:33:09.279791] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:41.652 [2024-11-06 14:33:09.279854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.652 [2024-11-06 14:33:09.279868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:28:41.652 [2024-11-06 14:33:09.279877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:28:41.652 [2024-11-06 14:33:09.279890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:28:41.652 [2024-11-06 14:33:09.279901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:41.652 [2024-11-06 14:33:09.279908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:41.652 [2024-11-06 14:33:09.279916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:41.652 [2024-11-06 14:33:09.279923] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:41.652 [2024-11-06 14:33:09.279929] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:41.652 [2024-11-06 14:33:09.279934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:41.652 [2024-11-06 14:33:09.281853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:28:41.913 [2024-11-06 14:33:09.289800] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:41.913 [2024-11-06 14:33:09.289821] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:41.913 [2024-11-06 14:33:09.289825] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:41.913 [2024-11-06 14:33:09.289830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:41.913 [2024-11-06 14:33:09.289850] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
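The failure signature repeating from here on decodes simply: errno 111 is ECONNREFUSED, meaning nothing listens on 10.0.0.3:4420 (or 10.0.0.4:4420) any more, so every connect() in posix_sock_create is refused, and the follow-up flush on the dead socket reports (9), i.e. EBADF. The refusal is reproducible by hand with bash's /dev/tcp pseudo-device (a sketch; the 10.0.0.x addresses belong to this test's virtual network, so it only makes sense inside the VM):

  # Expect a quick "Connection refused", the shell-level view of errno 111.
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' \
    || echo 'refused, matching posix_sock_create errno = 111'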
00:28:41.913 [2024-11-06 14:33:09.289888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.913 [2024-11-06 14:33:09.289899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:28:41.913 [2024-11-06 14:33:09.289906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:28:41.913 [2024-11-06 14:33:09.289917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:28:41.913 [2024-11-06 14:33:09.289927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:41.913 [2024-11-06 14:33:09.289938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:41.913 [2024-11-06 14:33:09.289944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:41.913 [2024-11-06 14:33:09.289959] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:41.913 [2024-11-06 14:33:09.289964] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:41.914 [2024-11-06 14:33:09.289968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:41.914 [2024-11-06 14:33:09.291857] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:41.914 [2024-11-06 14:33:09.291877] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:41.914 [2024-11-06 14:33:09.291882] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.291886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:41.914 [2024-11-06 14:33:09.291906] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.291942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.914 [2024-11-06 14:33:09.291961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:28:41.914 [2024-11-06 14:33:09.291968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:28:41.914 [2024-11-06 14:33:09.291979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:28:41.914 [2024-11-06 14:33:09.291989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:41.914 [2024-11-06 14:33:09.291994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:41.914 [2024-11-06 14:33:09.292000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
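Each attempt walks the same bdev_nvme reset state machine visible in the records: Delete qpairs for reset -> Start disconnecting ctrlr -> Start reconnecting ctrlr -> connect() refused -> reconnect poll gives up -> nvme_ctrlr_fail marks the controller failed -> pending resets are cleared and the attempt ends with "Resetting controller failed", at which point the next attempt is scheduled. To watch the paths flip from 4420 to 4421 while this runs, one option is a polling loop against the host app's RPC socket (a sketch; /tmp/host.sock is the socket this test uses throughout):

  # Print each controller's name and ports once per 100 ms.
  while sleep 0.1; do
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
      | jq -r '.[] | "\(.name): \([.ctrlrs[].trid.trsvcid] | join(","))"'
  done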
00:28:41.914 [2024-11-06 14:33:09.292006] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:41.914 [2024-11-06 14:33:09.292011] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:41.914 [2024-11-06 14:33:09.292015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:41.914 [2024-11-06 14:33:09.299858] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:41.914 [2024-11-06 14:33:09.299880] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:41.914 [2024-11-06 14:33:09.299884] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.299889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:41.914 [2024-11-06 14:33:09.299912] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.299947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.914 [2024-11-06 14:33:09.299967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:28:41.914 [2024-11-06 14:33:09.299974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:28:41.914 [2024-11-06 14:33:09.299985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:28:41.914 [2024-11-06 14:33:09.299994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:41.914 [2024-11-06 14:33:09.300000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:41.914 [2024-11-06 14:33:09.300006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:41.914 [2024-11-06 14:33:09.300011] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:41.914 [2024-11-06 14:33:09.300015] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:41.914 [2024-11-06 14:33:09.300019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:41.914 [2024-11-06 14:33:09.301915] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:41.914 [2024-11-06 14:33:09.301935] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:41.914 [2024-11-06 14:33:09.301940] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 
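Note that the retries keep aiming at 4420: bdev_nvme reconnects to the controller's last known transport ID, and the switch to 4421 arrives out-of-band, when the discovery log page processed at the end of this burst reports the 4420 entries "not found" and the 4421 entries "found again". The test then asserts that only 4421 survives, using the same pipeline as its get_subsystem_paths helper (the @201/@202 checks below):

  # Port list for one discovery-created controller; expected output: 4421
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs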
00:28:41.914 [2024-11-06 14:33:09.301944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:41.914 [2024-11-06 14:33:09.301972] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.302006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.914 [2024-11-06 14:33:09.302017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:28:41.914 [2024-11-06 14:33:09.302023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:28:41.914 [2024-11-06 14:33:09.302034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:28:41.914 [2024-11-06 14:33:09.302043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:41.914 [2024-11-06 14:33:09.302049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:41.914 [2024-11-06 14:33:09.302055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:41.914 [2024-11-06 14:33:09.302060] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:41.914 [2024-11-06 14:33:09.302064] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:41.914 [2024-11-06 14:33:09.302068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:41.914 [2024-11-06 14:33:09.309922] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:41.914 [2024-11-06 14:33:09.309944] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:41.914 [2024-11-06 14:33:09.309957] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.309961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:41.914 [2024-11-06 14:33:09.309981] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
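Bracketing all of this, the test keeps score of bdev events: notify_get_notifications -i <last_id> returns only notifications newer than the given id, and jq '. | length' turns that into a count. It is 0 at the @203 check (path flapping produces no bdev add/remove), then 4 at @212 once bdev_nvme_stop_mdns_discovery removes the four namespaces, advancing notify_id from 4 to 8. The get_notification_count helper seen at @116 boils down to roughly:

  # Count notifications newer than $notify_id, then advance the cursor.
  notify_id=${notify_id:-0}
  notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
    | jq '. | length')
  notify_id=$((notify_id + notification_count))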
00:28:41.914 [2024-11-06 14:33:09.310015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.914 [2024-11-06 14:33:09.310026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:28:41.914 [2024-11-06 14:33:09.310032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:28:41.914 [2024-11-06 14:33:09.310042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:28:41.914 [2024-11-06 14:33:09.310051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:41.914 [2024-11-06 14:33:09.310057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:41.914 [2024-11-06 14:33:09.310064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:41.914 [2024-11-06 14:33:09.310069] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:41.914 [2024-11-06 14:33:09.310073] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:41.914 [2024-11-06 14:33:09.310077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:41.914 [2024-11-06 14:33:09.311983] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:41.914 [2024-11-06 14:33:09.312004] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:41.914 [2024-11-06 14:33:09.312009] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.312013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:41.914 [2024-11-06 14:33:09.312037] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:41.914 [2024-11-06 14:33:09.312076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.914 [2024-11-06 14:33:09.312087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:28:41.914 [2024-11-06 14:33:09.312093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:28:41.914 [2024-11-06 14:33:09.312104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:28:41.914 [2024-11-06 14:33:09.312113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:41.914 [2024-11-06 14:33:09.312118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:41.914 [2024-11-06 14:33:09.312125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
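The storm settles into a strict cadence: one reconnect attempt per controller roughly every 10 ms. When reusing this flow outside the harness, a bounded wait beats eyeballing the log; wait_for_trsvcid below is a hypothetical helper (not part of mdns_discovery.sh) assembled only from RPCs and jq paths that appear in this log:

  # Wait until controller $1 reports listening port $2, or fail after ${3:-5}s.
  wait_for_trsvcid() {
    local name=$1 port=$2 deadline=$((SECONDS + ${3:-5}))
    until scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
          | jq -e --arg p "$port" 'any(.[].ctrlrs[]?.trid.trsvcid; . == $p)' > /dev/null; do
      (( SECONDS < deadline )) || return 1
      sleep 0.1
    done
  }
  # usage: wait_for_trsvcid mdns0_nvme0 4421 10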
00:28:41.914-00:28:41.917 [2024-11-06 14:33:09.312 - 14:33:09.392] [the remainder of the attempt above, plus eight more reconnect cycles per controller (nqn.2016-06.io.spdk:cnode0 against 10.0.0.3:4420 and nqn.2016-06.io.spdk:cnode20 against 10.0.0.4:4420, one pair roughly every 10 ms) repeat the identical record sequence: Delete qpairs for reset / Start disconnecting ctrlr / Start reconnecting ctrlr / connect() failed, errno = 111 / Bad file descriptor / controller reinitialization failed / Resetting controller failed. These are elided here; the closing records of the final cnode20 attempt follow]
00:28:41.917 [2024-11-06 14:33:09.392584] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:41.917 [2024-11-06 14:33:09.392588] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:41.917 [2024-11-06 14:33:09.392592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:41.917 [2024-11-06 14:33:09.397637] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:28:41.917 [2024-11-06 14:33:09.397668] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:41.917 [2024-11-06 14:33:09.397689] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:41.917 [2024-11-06 14:33:09.398634] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:28:41.917 [2024-11-06 14:33:09.398659] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:41.917 [2024-11-06 14:33:09.398678] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:41.917 [2024-11-06 14:33:09.483727] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:41.917 [2024-11-06 14:33:09.484718] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 14:33:10 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 14:33:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:28:43.111 [2024-11-06 14:33:10.499619] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.045 14:33:11 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.045 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:44.045 [2024-11-06 14:33:11.599838] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:28:44.046 2024/11/06 14:33:11 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:28:44.046 request: 00:28:44.046 { 00:28:44.046 "method": "bdev_nvme_start_mdns_discovery", 00:28:44.046 "params": { 00:28:44.046 "name": "mdns", 00:28:44.046 "svcname": "_nvme-disc._http", 00:28:44.046 "hostnqn": "nqn.2021-12.io.spdk:test" 00:28:44.046 } 00:28:44.046 } 00:28:44.046 Got JSON-RPC error response 00:28:44.046 GoRPCClient: error on JSON-RPC call 00:28:44.046 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:44.046 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:44.046 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:44.046 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:44.046 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:44.046 14:33:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:28:44.612 [2024-11-06 14:33:12.188377] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:28:44.873 [2024-11-06 14:33:12.288373] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:28:44.873 [2024-11-06 14:33:12.388381] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:44.873 [2024-11-06 14:33:12.388401] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:44.873 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:44.873 cookie is 0 00:28:44.873 is_local: 1 00:28:44.873 our_own: 0 00:28:44.873 wide_area: 0 00:28:44.873 multicast: 1 00:28:44.873 cached: 1 00:28:44.873 [2024-11-06 14:33:12.488382] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:44.873 [2024-11-06 14:33:12.488482] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:44.873 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:44.873 cookie is 0 00:28:44.873 is_local: 1 00:28:44.873 our_own: 0 00:28:44.873 wide_area: 0 00:28:44.873 multicast: 1 00:28:44.873 cached: 1 00:28:44.873 [2024-11-06 14:33:12.488499] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:28:45.136 [2024-11-06 14:33:12.588387] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:45.136 [2024-11-06 14:33:12.588479] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:45.136 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:45.136 cookie is 0 00:28:45.136 is_local: 1 00:28:45.136 our_own: 0 00:28:45.136 wide_area: 0 00:28:45.136 multicast: 1 00:28:45.136 cached: 1 00:28:45.136 [2024-11-06 14:33:12.688386] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:45.136 [2024-11-06 14:33:12.688488] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:45.136 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:45.136 cookie is 0 00:28:45.136 is_local: 1 00:28:45.136 our_own: 0 00:28:45.136 wide_area: 0 00:28:45.136 multicast: 1 00:28:45.136 cached: 1 00:28:45.136 [2024-11-06 14:33:12.688503] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:28:46.068 [2024-11-06 14:33:13.400697] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:28:46.068 [2024-11-06 14:33:13.400809] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:28:46.068 [2024-11-06 14:33:13.400837] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:46.068 [2024-11-06 14:33:13.486815] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:28:46.068 [2024-11-06 14:33:13.545158] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:28:46.068 [2024-11-06 14:33:13.545818] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x61500002e200:1 started. 00:28:46.068 [2024-11-06 14:33:13.547561] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:28:46.068 [2024-11-06 14:33:13.547669] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:46.068 [2024-11-06 14:33:13.549837] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x61500002e200 was disconnected and freed. delete nvme_qpair. 
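
[Note: the restart-and-reattach sequence traced above maps onto a handful of bdev_nvme RPCs. A minimal sketch of driving them by hand, assuming SPDK's scripts/rpc.py and a host app listening on /tmp/host.sock — RPC names and flags as they appear in the trace; this sketch is not part of the captured run:

    # start mDNS-based discovery; attached subsystems get the base name "mdns"
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # inspect what the avahi poller found and which controllers were attached
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    # tear the discovery service (and its attached controllers) back down
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns

Starting a second discovery with the same base name, as the test does at @217 and @225 above, is expected to fail with -17 (File exists).]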
00:28:46.068 [2024-11-06 14:33:13.600482] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:46.068 [2024-11-06 14:33:13.600499] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:46.068 [2024-11-06 14:33:13.600523] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:46.068 [2024-11-06 14:33:13.686602] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:28:46.326 [2024-11-06 14:33:13.745027] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:28:46.326 [2024-11-06 14:33:13.745708] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002ee80:1 started. 00:28:46.326 [2024-11-06 14:33:13.747432] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:28:46.326 [2024-11-06 14:33:13.747452] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:46.326 [2024-11-06 14:33:13.749803] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002ee80 was disconnected and freed. delete nvme_qpair. 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == 
\m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.632 [2024-11-06 14:33:16.717399] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:28:49.632 2024/11/06 14:33:16 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:28:49.632 request: 00:28:49.632 { 00:28:49.632 "method": "bdev_nvme_start_mdns_discovery", 00:28:49.632 "params": { 00:28:49.632 "name": "cdc", 00:28:49.632 "svcname": "_nvme-disc._tcp", 00:28:49.632 "hostnqn": "nqn.2021-12.io.spdk:test" 00:28:49.632 } 00:28:49.632 } 00:28:49.632 Got JSON-RPC error response 00:28:49.632 GoRPCClient: error on JSON-RPC call 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:49.632 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:49.633 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:49.633 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:49.633 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:49.633 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:49.633 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:49.633 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:49.633 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.633 14:33:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:28:49.633 [2024-11-06 14:33:16.888426] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:50.568 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:50.568 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:50.568 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 104078 00:28:50.568 14:33:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 104078 00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 104108 00:28:50.826 Got SIGTERM, quitting. 00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:28:50.826 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:28:50.826 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:28:50.826 avahi-daemon 0.8 exiting. 
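
[Note: the spdk1/10.0.0.3 check above is a plain text scan over avahi-browse output. A minimal reconstruction of what check_mdns_request_exists() does, based only on the trace (helper name, variables, and globs as shown there):

    #!/usr/bin/env bash
    # dump all _nvme-disc._tcp records once (-t), resolved (-r), in parseable (-p) form
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)
    readarray -t lines <<< "$output"
    found=no
    for line in "${lines[@]}"; do
        # a matching record must name the right process and carry the ip and port
        if [[ $line == *spdk1* && $line == *10.0.0.3* && $line == *8009* ]]; then
            found=yes
        fi
    done
    # after nvmf_subsystem_remove_listener, "not found" is the expected outcome
    [[ $found == no ]]

In the run above, only spdk0 records remain once the 10.0.0.3:8009 listener is removed, so the "not found" branch (@105/@108) passes.]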
00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.826 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.826 rmmod nvme_tcp 00:28:50.826 rmmod nvme_fabrics 00:28:50.826 rmmod nvme_keyring 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 104028 ']' 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 104028 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # '[' -z 104028 ']' 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # kill -0 104028 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # uname 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104028 00:28:51.084 killing process with pid 104028 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104028' 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@971 -- # kill 104028 00:28:51.084 14:33:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@976 -- # wait 104028 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:28:51.654 00:28:51.654 real 0m22.605s 00:28:51.654 user 0m42.879s 00:28:51.654 sys 0m1.727s 00:28:51.654 ************************************ 00:28:51.654 END TEST nvmf_mdns_discovery 00:28:51.654 ************************************ 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:51.654 14:33:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.917 ************************************ 00:28:51.917 START TEST nvmf_host_multipath 00:28:51.917 ************************************ 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:51.917 * Looking for test storage... 
00:28:51.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.917 --rc genhtml_branch_coverage=1 00:28:51.917 --rc genhtml_function_coverage=1 00:28:51.917 --rc genhtml_legend=1 00:28:51.917 --rc geninfo_all_blocks=1 00:28:51.917 --rc geninfo_unexecuted_blocks=1 00:28:51.917 00:28:51.917 ' 00:28:51.917 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:51.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.918 --rc genhtml_branch_coverage=1 00:28:51.918 --rc genhtml_function_coverage=1 00:28:51.918 --rc genhtml_legend=1 00:28:51.918 --rc geninfo_all_blocks=1 00:28:51.918 --rc geninfo_unexecuted_blocks=1 00:28:51.918 00:28:51.918 ' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:51.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.918 --rc genhtml_branch_coverage=1 00:28:51.918 --rc genhtml_function_coverage=1 00:28:51.918 --rc genhtml_legend=1 00:28:51.918 --rc geninfo_all_blocks=1 00:28:51.918 --rc geninfo_unexecuted_blocks=1 00:28:51.918 00:28:51.918 ' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:51.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.918 --rc genhtml_branch_coverage=1 00:28:51.918 --rc genhtml_function_coverage=1 00:28:51.918 --rc genhtml_legend=1 00:28:51.918 --rc geninfo_all_blocks=1 00:28:51.918 --rc geninfo_unexecuted_blocks=1 00:28:51.918 00:28:51.918 ' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.918 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:51.918 Cannot find device "nvmf_init_br" 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:51.918 Cannot find device "nvmf_init_br2" 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:28:51.918 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:52.182 Cannot find device "nvmf_tgt_br" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:52.182 Cannot find device "nvmf_tgt_br2" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:52.182 Cannot find device "nvmf_init_br" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:52.182 Cannot find device "nvmf_init_br2" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:52.182 Cannot find device "nvmf_tgt_br" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:52.182 Cannot find device "nvmf_tgt_br2" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:52.182 Cannot find device "nvmf_br" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:52.182 Cannot find device "nvmf_init_if" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:52.182 Cannot find device "nvmf_init_if2" 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:28:52.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:52.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:52.182 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:52.183 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:52.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:52.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:28:52.444 00:28:52.444 --- 10.0.0.3 ping statistics --- 00:28:52.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.444 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:52.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:52.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:28:52.444 00:28:52.444 --- 10.0.0.4 ping statistics --- 00:28:52.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.444 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:52.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:28:52.444 00:28:52.444 --- 10.0.0.1 ping statistics --- 00:28:52.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.444 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:52.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:52.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:28:52.444 00:28:52.444 --- 10.0.0.2 ping statistics --- 00:28:52.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.444 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=104756 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 104756 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 104756 ']' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:52.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:52.444 14:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:52.444 [2024-11-06 14:33:19.964921] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:28:52.445 [2024-11-06 14:33:19.965090] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.705 [2024-11-06 14:33:20.124137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:52.705 [2024-11-06 14:33:20.246253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.705 [2024-11-06 14:33:20.246322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.705 [2024-11-06 14:33:20.246338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.705 [2024-11-06 14:33:20.246353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.705 [2024-11-06 14:33:20.246363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.705 [2024-11-06 14:33:20.248128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.705 [2024-11-06 14:33:20.248129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=104756 00:28:53.279 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:53.541 [2024-11-06 14:33:21.056499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.541 14:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:53.800 Malloc0 00:28:53.800 14:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:54.058 14:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:54.058 14:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:54.316 [2024-11-06 14:33:21.876631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:54.316 14:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4421 00:28:54.574 [2024-11-06 14:33:22.068693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:54.574 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=104854 00:28:54.574 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:54.574 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 104854 /var/tmp/bdevperf.sock 00:28:54.575 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:54.575 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 104854 ']' 00:28:54.575 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.575 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.575 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:54.575 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.575 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:55.509 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:55.509 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:28:55.509 14:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:55.767 14:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:56.026 Nvme0n1 00:28:56.026 14:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:56.284 Nvme0n1 00:28:56.284 14:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:28:56.284 14:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:57.219 14:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:28:57.219 14:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:57.478 14:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n 
optimized 00:28:57.736 14:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:28:57.736 14:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=104940 00:28:57.736 14:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:57.736 14:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 104756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:04.358 Attaching 4 probes... 00:29:04.358 @path[10.0.0.3, 4421]: 23174 00:29:04.358 @path[10.0.0.3, 4421]: 23594 00:29:04.358 @path[10.0.0.3, 4421]: 23478 00:29:04.358 @path[10.0.0.3, 4421]: 23230 00:29:04.358 @path[10.0.0.3, 4421]: 23233 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 104940 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 104756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=105067 00:29:04.358 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:10.920 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:10.920 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:10.920 Attaching 4 probes... 00:29:10.920 @path[10.0.0.3, 4420]: 22210 00:29:10.920 @path[10.0.0.3, 4420]: 22497 00:29:10.920 @path[10.0.0.3, 4420]: 22770 00:29:10.920 @path[10.0.0.3, 4420]: 22368 00:29:10.920 @path[10.0.0.3, 4420]: 22077 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 105067 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 104756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=105203 00:29:10.920 14:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:17.478 Attaching 4 probes... 
00:29:17.478 @path[10.0.0.3, 4421]: 13903 00:29:17.478 @path[10.0.0.3, 4421]: 23232 00:29:17.478 @path[10.0.0.3, 4421]: 22950 00:29:17.478 @path[10.0.0.3, 4421]: 23306 00:29:17.478 @path[10.0.0.3, 4421]: 22757 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 105203 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:29:17.478 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:29:17.737 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:29:17.737 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=105328 00:29:17.737 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:17.737 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 104756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:24.307 Attaching 4 probes... 
00:29:24.307 00:29:24.307 00:29:24.307 00:29:24.307 00:29:24.307 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 105328 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 104756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=105464 00:29:24.307 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:30.868 Attaching 4 probes... 
00:29:30.868 @path[10.0.0.3, 4421]: 22850
00:29:30.868 @path[10.0.0.3, 4421]: 22396
00:29:30.868 @path[10.0.0.3, 4421]: 22909
00:29:30.868 @path[10.0.0.3, 4421]: 23128
00:29:30.868 @path[10.0.0.3, 4421]: 23286
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 105464
00:29:30.868 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:30.868 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:29:30.868 [2024-11-06 14:33:58.182227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set
00:29:30.868 [... the preceding tcp.c:1773 message repeats verbatim for tqpair=0x618000003480, timestamps 14:33:58.182262 through 14:33:58.182579, while the 4421 listener is torn down ...]
00:29:30.869 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:29:31.804 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:29:31.804 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=105595
00:29:31.804 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:29:31.804 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 104756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:38.424 Attaching 4 probes...
00:29:38.424 @path[10.0.0.3, 4420]: 21552 00:29:38.424 @path[10.0.0.3, 4420]: 21797 00:29:38.424 @path[10.0.0.3, 4420]: 21934 00:29:38.424 @path[10.0.0.3, 4420]: 21794 00:29:38.424 @path[10.0.0.3, 4420]: 21611 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 105595 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:29:38.424 [2024-11-06 14:34:05.630578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:38.424 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:29:45.016 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:29:45.016 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 104756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:45.016 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=105793 00:29:45.016 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:50.281 14:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:50.281 14:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:50.539 Attaching 4 probes... 
00:29:50.539 @path[10.0.0.3, 4421]: 17203 00:29:50.539 @path[10.0.0.3, 4421]: 21195 00:29:50.539 @path[10.0.0.3, 4421]: 22872 00:29:50.539 @path[10.0.0.3, 4421]: 22860 00:29:50.539 @path[10.0.0.3, 4421]: 22914 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 105793 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 104854 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 104854 ']' 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 104854 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:29:50.539 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:50.540 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104854 00:29:50.540 killing process with pid 104854 00:29:50.540 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:29:50.540 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:29:50.540 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104854' 00:29:50.540 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 104854 00:29:50.540 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 104854 00:29:50.540 { 00:29:50.540 "results": [ 00:29:50.540 { 00:29:50.540 "job": "Nvme0n1", 00:29:50.540 "core_mask": "0x4", 00:29:50.540 "workload": "verify", 00:29:50.540 "status": "terminated", 00:29:50.540 "verify_range": { 00:29:50.540 "start": 0, 00:29:50.540 "length": 16384 00:29:50.540 }, 00:29:50.540 "queue_depth": 128, 00:29:50.540 "io_size": 4096, 00:29:50.540 "runtime": 54.236072, 00:29:50.540 "iops": 9532.89906392926, 00:29:50.540 "mibps": 37.237886968473674, 00:29:50.540 "io_failed": 0, 00:29:50.540 "io_timeout": 0, 00:29:50.540 "avg_latency_us": 13400.966598578174, 00:29:50.540 "min_latency_us": 623.8523076923077, 00:29:50.540 "max_latency_us": 7020619.618461538 00:29:50.540 } 00:29:50.540 ], 00:29:50.540 "core_count": 1 00:29:50.540 } 00:29:51.480 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 104854 00:29:51.480 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:51.480 [2024-11-06 14:33:22.151884] Starting SPDK v25.01-pre git sha1 
d1c46ed8e / DPDK 24.03.0 initialization...
00:29:51.480 [2024-11-06 14:33:22.152016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104854 ]
00:29:51.480 [2024-11-06 14:33:22.310675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:51.480 [2024-11-06 14:33:22.405500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:51.480 Running I/O for 90 seconds...
00:29:51.480 11487.00 IOPS, 44.87 MiB/s [2024-11-06T14:34:19.115Z] 11677.50 IOPS, 45.62 MiB/s [2024-11-06T14:34:19.115Z] 11745.67 IOPS, 45.88 MiB/s [2024-11-06T14:34:19.115Z] 11754.50 IOPS, 45.92 MiB/s [2024-11-06T14:34:19.115Z] 11723.40 IOPS, 45.79 MiB/s [2024-11-06T14:34:19.115Z] 11735.67 IOPS, 45.84 MiB/s [2024-11-06T14:34:19.115Z] 11716.29 IOPS, 45.77 MiB/s [2024-11-06T14:34:19.115Z]
00:29:51.480 [2024-11-06 14:33:31.853520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:51.480 [2024-11-06 14:33:31.853574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:51.481 [... the remaining in-flight WRITE (lba:91664-91752) and READ (lba:91144-91328) commands complete with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, one nvme_qpair command/completion NOTICE pair per I/O, timestamps 14:33:31.853630 through 14:33:31.854845 ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.854860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.854870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.854885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.854894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.854909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.854919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.854934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.854944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.854970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.854979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.854994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855610] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.481 [2024-11-06 14:33:31.855620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:51.481 [2024-11-06 14:33:31.855635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.482 [2024-11-06 14:33:31.855867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.855892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.855916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.855941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.855976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.855991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.856984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.856999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:51.482 [2024-11-06 14:33:31.857261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:51.482 [2024-11-06 14:33:31.857276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.482 [2024-11-06 14:33:31.857285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:31.857806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:31.857815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:51.483 11705.62 IOPS, 45.73 MiB/s [2024-11-06T14:34:19.118Z] 11644.78 IOPS, 45.49 MiB/s [2024-11-06T14:34:19.118Z] 11607.80 IOPS, 45.34 MiB/s [2024-11-06T14:34:19.118Z] 11586.00 IOPS, 45.26 MiB/s [2024-11-06T14:34:19.118Z] 11550.92 IOPS, 45.12 MiB/s [2024-11-06T14:34:19.118Z] 11539.85 IOPS, 45.08 MiB/s [2024-11-06T14:34:19.118Z] 11511.14 IOPS, 44.97 MiB/s [2024-11-06T14:34:19.118Z] [2024-11-06 14:33:38.292570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:38.292628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:38.292685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:38.292702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:38.292723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:38.292736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:38.292756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:38.292788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:38.292809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:38.292821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:51.483 [2024-11-06 14:33:38.292841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.483 [2024-11-06 14:33:38.292854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 
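(Annotation: the throughput samples above are consistent with the I/O size visible in the command trace. Each command is len:8 blocks and consecutive LBAs step by 8; assuming the common 512-byte block size, which is an inference and not stated in this log, that is 8 x 512 B = 4096 B per I/O, and 11705.62 IOPS x 4096 B = 47.95 MB/s = 45.72 MiB/s, matching the reported 45.73 MiB/s.)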
00:29:51.483 [2024-11-06 14:33:38.292-38.295] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second burst of repeated WRITE (lba:13728-14096, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba:13384-13600, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1, again each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 [near-identical per-command NOTICE pairs collapsed; consecutive entries differ only in cid, lba (stepping by 8), and sqhd]
(03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.485 [2024-11-06 14:33:38.295740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.485 [2024-11-06 14:33:38.295775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.485 [2024-11-06 14:33:38.295817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.485 [2024-11-06 14:33:38.295851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.485 [2024-11-06 14:33:38.295885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.485 [2024-11-06 14:33:38.295919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.485 [2024-11-06 14:33:38.295963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.295984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.485 [2024-11-06 14:33:38.295997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.296019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.485 [2024-11-06 14:33:38.296031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.296053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.485 [2024-11-06 14:33:38.296065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.296086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.485 [2024-11-06 14:33:38.296098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.296120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.485 [2024-11-06 14:33:38.296133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.296320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.485 [2024-11-06 14:33:38.296337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:51.485 [2024-11-06 14:33:38.296367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.486 [2024-11-06 14:33:38.296380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.486 [2024-11-06 14:33:38.296425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.486 [2024-11-06 14:33:38.296464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.486 [2024-11-06 14:33:38.296502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.486 [2024-11-06 14:33:38.296540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.486 [2024-11-06 14:33:38.296578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:51.486 [2024-11-06 14:33:38.296616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.486 [2024-11-06 14:33:38.296654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.296972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.296997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:29:51.486 [2024-11-06 14:33:38.297763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.486 [2024-11-06 14:33:38.297855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:51.486 [2024-11-06 14:33:38.297880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.487 [2024-11-06 14:33:38.297893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:51.487 11034.20 IOPS, 43.10 MiB/s [2024-11-06T14:34:19.122Z] 10764.19 IOPS, 42.05 MiB/s [2024-11-06T14:34:19.122Z] 10815.76 IOPS, 42.25 MiB/s [2024-11-06T14:34:19.122Z] 10853.28 IOPS, 42.40 MiB/s [2024-11-06T14:34:19.122Z] 10893.63 IOPS, 42.55 MiB/s [2024-11-06T14:34:19.122Z] 10917.60 IOPS, 42.65 MiB/s [2024-11-06T14:34:19.122Z] 10949.62 IOPS, 42.77 MiB/s [2024-11-06T14:34:19.122Z] [2024-11-06 14:33:45.133910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.487 [2024-11-06 14:33:45.133972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:51.487 [2024-11-06 14:33:45.134022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.487 [2024-11-06 14:33:45.134035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:51.487 [2024-11-06 14:33:45.134051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.487 [2024-11-06 14:33:45.134061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:51.487 [2024-11-06 14:33:45.134077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.487 [2024-11-06 14:33:45.134087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:51.487 [2024-11-06 14:33:45.134102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.487 [2024-11-06 14:33:45.134111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
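A note on the progress samples above: every command in this section is len:8, i.e. eight LBAs per I/O, and the logged MiB/s figures are consistent with 4 KiB transfers. The sketch below checks that arithmetic; the 512-byte LBA size is an assumption, since the namespace format is never printed in this log.

```python
# Sanity-check the per-interval progress samples: IOPS * I/O size ~= MiB/s.
# Assumption: "len:8" means eight 512-byte LBAs (4096 bytes) per command;
# the log itself does not state the namespace LBA size.
IO_BYTES = 8 * 512  # len:8 at an assumed 512 B per block

for iops, logged in [(11034.20, 43.10), (10949.62, 42.77), (8783.85, 34.31)]:
    mibps = iops * IO_BYTES / (1024 * 1024)
    print(f"{iops:9.2f} IOPS -> {mibps:5.2f} MiB/s (logged: {logged:5.2f})")
```

All three samples reproduce the logged values to two decimals, so the MiB/s column is simply IOPS scaled by a 4 KiB I/O size.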
00:29:51.487 [2024-11-06 14:33:45.133910 - 14:33:45.137632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:26864-27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:27136-27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, one command per 8-block LBA step
00:29:51.488 [2024-11-06 14:33:45.133972 - 14:33:45.137641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: every completion in this burst: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0048-0010 (wrapping through 007f) p:0 m:0 dnr:0
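The parenthesized pair in each completion line is the NVMe status code type and status code, printed as (SCT/SC) in hex: type 03h is path-related status, under which code 02h is ANA inaccessible, matching the text SPDK prints, while the (00/08) pair that appears further down is the generic-status "aborted due to SQ deletion". A minimal decoder sketch covering just the two pairs seen in this section:

```python
# Decode the "(SCT/SC)" field from spdk_nvme_print_completion output.
# Only the two status pairs that occur in this log are mapped; a full
# decoder would mirror the complete NVMe status tables.
KNOWN = {
    (0x03, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # path-related: ANA inaccessible
    (0x00, 0x08): "ABORTED - SQ DELETION",           # generic command status
}

def decode(pair: str) -> str:
    sct, sc = (int(field, 16) for field in pair.split("/"))
    return KNOWN.get((sct, sc), f"unknown SCT={sct:#04x} SC={sc:#04x}")

print(decode("03/02"))  # -> ASYMMETRIC ACCESS INACCESSIBLE
print(decode("00/08"))  # -> ABORTED - SQ DELETION
```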
00:29:51.488 10578.82 IOPS, 41.32 MiB/s [2024-11-06T14:34:19.123Z] 10118.87 IOPS, 39.53 MiB/s [2024-11-06T14:34:19.123Z] 9697.25 IOPS, 37.88 MiB/s [2024-11-06T14:34:19.123Z] 9309.36 IOPS, 36.36 MiB/s [2024-11-06T14:34:19.123Z] 8951.31 IOPS, 34.97 MiB/s [2024-11-06T14:34:19.123Z] 8619.78 IOPS, 33.67 MiB/s [2024-11-06T14:34:19.123Z] 8311.93 IOPS, 32.47 MiB/s [2024-11-06T14:34:19.123Z] 8323.52 IOPS, 32.51 MiB/s [2024-11-06T14:34:19.123Z] 8419.63 IOPS, 32.89 MiB/s [2024-11-06T14:34:19.123Z] 8517.90 IOPS, 33.27 MiB/s [2024-11-06T14:34:19.123Z] 8612.97 IOPS, 33.64 MiB/s [2024-11-06T14:34:19.123Z] 8703.94 IOPS, 34.00 MiB/s [2024-11-06T14:34:19.123Z] 8783.85 IOPS, 34.31 MiB/s [2024-11-06T14:34:19.124Z]
00:29:51.489 [2024-11-06 14:33:58.181783 - 14:33:58.182546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:56752-56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:56120-56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, one command per 8-block LBA step
00:29:51.489 [2024-11-06 14:33:58.181835 - 14:33:58.183016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: every completion in this burst: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0050-006a p:0 m:0 dnr:0
00:29:51.489 [2024-11-06 14:33:58.183300 - 14:33:58.183529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:56248-56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 cid:29 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:51.490 [2024-11-06 14:33:58.183319 - 14:33:58.183538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: every completion from this point: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.490 [2024-11-06 14:33:58.183548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56328 len:8 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 
[2024-11-06 14:33:58.183750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.183984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.183993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.490 [2024-11-06 14:33:58.184273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.490 [2024-11-06 14:33:58.184282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 
[2024-11-06 14:33:58.184551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-11-06 14:33:58.184578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:68 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.184990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.184999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.185009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.185018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.185028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.185037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.185047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.185056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.185066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.491 [2024-11-06 14:33:58.185075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.491 [2024-11-06 14:33:58.185085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57072 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.492 [2024-11-06 14:33:58.185308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.492 [2024-11-06 14:33:58.185801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.492 [2024-11-06 
14:33:58.185821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.492 [2024-11-06 14:33:58.185840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.492 [2024-11-06 14:33:58.185858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-11-06 14:33:58.185878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.492 [2024-11-06 14:33:58.185893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:29:51.492 [2024-11-06 14:33:58.186821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.492 [2024-11-06 14:33:58.186862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:29:51.492 [2024-11-06 14:33:58.186943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.492 [2024-11-06 14:33:58.186973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4421 00:29:51.492 [2024-11-06 14:33:58.186988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:29:51.492 [2024-11-06 14:33:58.187004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:29:51.492 [2024-11-06 14:33:58.187020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.492 [2024-11-06 14:33:58.187030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.492 [2024-11-06 14:33:58.187040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.492 [2024-11-06 14:33:58.187050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.492 [2024-11-06 14:33:58.187060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.492 8852.00 IOPS, 34.58 MiB/s [2024-11-06T14:34:19.127Z] 8900.42 IOPS, 34.77 MiB/s [2024-11-06T14:34:19.127Z] 8958.54 IOPS, 34.99 MiB/s [2024-11-06T14:34:19.127Z] 9008.47 IOPS, 35.19 MiB/s [2024-11-06T14:34:19.127Z] 9057.15 IOPS, 35.38 MiB/s [2024-11-06T14:34:19.127Z] 9104.67 IOPS, 35.57 MiB/s [2024-11-06T14:34:19.127Z] 9145.93 IOPS, 35.73 MiB/s [2024-11-06T14:34:19.127Z] 9189.33 IOPS, 35.90 MiB/s [2024-11-06T14:34:19.127Z] 9228.93 IOPS, 36.05 MiB/s [2024-11-06T14:34:19.127Z] 9267.30 IOPS, 36.20 MiB/s [2024-11-06T14:34:19.127Z]
[2024-11-06 14:34:08.266304] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:29:51.492 9315.22 IOPS, 36.39 MiB/s [2024-11-06T14:34:19.127Z] 9353.70 IOPS, 36.54 MiB/s [2024-11-06T14:34:19.127Z] 9393.00 IOPS, 36.69 MiB/s [2024-11-06T14:34:19.127Z] 9402.88 IOPS, 36.73 MiB/s [2024-11-06T14:34:19.127Z] 9372.08 IOPS, 36.61 MiB/s [2024-11-06T14:34:19.127Z] 9391.00 IOPS, 36.68 MiB/s [2024-11-06T14:34:19.127Z] 9426.43 IOPS, 36.82 MiB/s [2024-11-06T14:34:19.127Z] 9465.56 IOPS, 36.97 MiB/s [2024-11-06T14:34:19.127Z] 9502.94 IOPS, 37.12 MiB/s [2024-11-06T14:34:19.127Z] 9530.54 IOPS, 37.23 MiB/s [2024-11-06T14:34:19.127Z]
Received shutdown signal, test time was about 54.236924 seconds
00:29:51.492
00:29:51.492                                                          Latency(us)
00:29:51.492 [2024-11-06T14:34:19.127Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:51.492 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:51.492 Verification LBA range: start 0x0 length 0x4000
00:29:51.492 Nvme0n1            :      54.24    9532.90      37.24       0.00     0.00   13400.97     623.85 7020619.62
00:29:51.492 [2024-11-06T14:34:19.127Z] ===================================================================================================================
00:29:51.492 [2024-11-06T14:34:19.127Z] Total              :              9532.90      37.24       0.00     0.00   13400.97     623.85 7020619.62
00:29:51.492 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:51.492 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:51.492 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 104756 ']'
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 104756
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 104756 ']'
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 104756
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104756
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 104756
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104756'
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 104756
00:29:51.751 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 104756
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:29:52.318 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:29:52.577 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:29:52.577 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:29:52.577 14:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:29:52.577
00:29:52.577 real	1m0.719s
00:29:52.577 user	2m53.470s
00:29:52.577 sys	0m9.423s
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:52.577 ************************************
00:29:52.577 END TEST nvmf_host_multipath
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:29:52.577 ************************************
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:52.577 ************************************
00:29:52.577 START TEST nvmf_timeout
00:29:52.577 ************************************
00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:29:52.577 * Looking for test storage...
00:29:52.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.577 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.837 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.838 --rc genhtml_branch_coverage=1 00:29:52.838 --rc genhtml_function_coverage=1 00:29:52.838 --rc genhtml_legend=1 00:29:52.838 --rc geninfo_all_blocks=1 00:29:52.838 --rc geninfo_unexecuted_blocks=1 00:29:52.838 00:29:52.838 ' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.838 --rc genhtml_branch_coverage=1 00:29:52.838 --rc genhtml_function_coverage=1 00:29:52.838 --rc genhtml_legend=1 00:29:52.838 --rc geninfo_all_blocks=1 00:29:52.838 --rc geninfo_unexecuted_blocks=1 00:29:52.838 00:29:52.838 ' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.838 --rc genhtml_branch_coverage=1 00:29:52.838 --rc genhtml_function_coverage=1 00:29:52.838 --rc genhtml_legend=1 00:29:52.838 --rc geninfo_all_blocks=1 00:29:52.838 --rc geninfo_unexecuted_blocks=1 00:29:52.838 00:29:52.838 ' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.838 --rc genhtml_branch_coverage=1 00:29:52.838 --rc genhtml_function_coverage=1 00:29:52.838 --rc genhtml_legend=1 00:29:52.838 --rc geninfo_all_blocks=1 00:29:52.838 --rc geninfo_unexecuted_blocks=1 00:29:52.838 00:29:52.838 ' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.838 
14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.838 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.838 14:34:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:52.838 Cannot find device "nvmf_init_br" 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:52.838 Cannot find device "nvmf_init_br2" 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:29:52.838 Cannot find device "nvmf_tgt_br" 00:29:52.838 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:52.839 Cannot find device "nvmf_tgt_br2" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:52.839 Cannot find device "nvmf_init_br" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:52.839 Cannot find device "nvmf_init_br2" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:52.839 Cannot find device "nvmf_tgt_br" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:52.839 Cannot find device "nvmf_tgt_br2" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:52.839 Cannot find device "nvmf_br" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:52.839 Cannot find device "nvmf_init_if" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:52.839 Cannot find device "nvmf_init_if2" 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:52.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:52.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:52.839 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:53.100 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
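[annotation] Everything from nvmf/common.sh@177 through @219 above is standard iproute2 plumbing: veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to the nvmf_br bridge, and iptables ACCEPT rules for the NVMe/TCP port. Reduced to a single link, the pattern looks like this (device and namespace names reused from the trace; a sketch, not the full common.sh sequence):

# One veth link from the host into the target namespace, bridged on the host side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings at @222-@225 that follow are the sanity check that this topology actually forwards in both directions.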
00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:53.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:53.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:29:53.101 00:29:53.101 --- 10.0.0.3 ping statistics --- 00:29:53.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.101 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:53.101 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:53.101 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:29:53.101 00:29:53.101 --- 10.0.0.4 ping statistics --- 00:29:53.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.101 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:53.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:29:53.101 00:29:53.101 --- 10.0.0.1 ping statistics --- 00:29:53.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.101 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:53.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:29:53.101 00:29:53.101 --- 10.0.0.2 ping statistics --- 00:29:53.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.101 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=106177 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 106177 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:53.101 14:34:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 106177 ']' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:53.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:53.101 14:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:53.101 [2024-11-06 14:34:20.667652] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:53.101 [2024-11-06 14:34:20.667746] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.362 [2024-11-06 14:34:20.827519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:53.362 [2024-11-06 14:34:20.932549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.362 [2024-11-06 14:34:20.932599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.362 [2024-11-06 14:34:20.932615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.362 [2024-11-06 14:34:20.932630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.362 [2024-11-06 14:34:20.932640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
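[annotation] Per the app_setup_trace notices above, the target was started with tracepoint group mask 0xFFFF, so its trace buffer can be inspected while the run is live; the two commands the notices themselves suggest are:

spdk_trace -s nvmf -i 0               # snapshot events from the running target (shm id 0)
cp /dev/shm/nvmf_trace.0 /some/dir/   # or keep the raw buffer for offline analysis

(/some/dir/ is a placeholder; the notices only say to copy the file.)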
00:29:53.362 [2024-11-06 14:34:20.934416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.362 [2024-11-06 14:34:20.934503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.933 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:53.933 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:29:53.933 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.933 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.933 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:54.194 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.194 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.194 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:54.194 [2024-11-06 14:34:21.772099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.194 14:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:54.453 Malloc0 00:29:54.453 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.713 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.972 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:55.231 [2024-11-06 14:34:22.695768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=106268 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 106268 /var/tmp/bdevperf.sock 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 106268 ']' 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:55.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
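[annotation] Strung together, the host/timeout.sh@25-29 provisioning just traced is the usual five RPCs that give bdevperf something to connect to; repeated here in one place (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, flags exactly as traced):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE from @11-@12
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420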
00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:55.231 14:34:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:55.231 [2024-11-06 14:34:22.781978] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:55.231 [2024-11-06 14:34:22.782099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106268 ] 00:29:55.489 [2024-11-06 14:34:22.942427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.489 [2024-11-06 14:34:23.046310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.056 14:34:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:56.056 14:34:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:29:56.056 14:34:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:56.314 14:34:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:56.573 NVMe0n1 00:29:56.573 14:34:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=106316 00:29:56.573 14:34:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:56.573 14:34:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:29:56.573 Running I/O for 10 seconds... 
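[annotation] Note the attach options a few steps up: NVMe0 was created with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, so bdev_nvme will retry a lost path every 2 s for up to 5 s before declaring the controller gone; that window is what this timeout test exercises. For reference, the attach call as traced (flags verbatim, path shortened to rpc.py):

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The listener removal that follows tears down the active TCP qpair mid-run, so the recv-state errors and ABORTED - SQ DELETION completions below are the failure the test intends to inject, not a crash.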
00:29:57.507 14:34:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:29:57.769 10430.00 IOPS, 40.74 MiB/s [2024-11-06T14:34:25.404Z] [2024-11-06 14:34:25.269992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set
[previous tcp.c:1773 message repeated verbatim for the rest of the qpair teardown; entries timestamped 14:34:25.270041 through 14:34:25.270558 omitted]
00:29:57.770 [2024-11-06 14:34:25.272123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.770 [2024-11-06 14:34:25.272168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[remaining in-flight READs, lba:91656 through lba:92056 in steps of 8 blocks, completed identically: each nvme_io_qpair_print_command *NOTICE* is followed by spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; per-command entries timestamped 14:34:25.272195 through 14:34:25.275944 omitted, and the captured log truncates mid-entry at the final timestamp]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.275968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.275977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.275988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.276525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.276545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.276650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.276676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.276698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.276842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.276943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.276975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.276987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.277930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.277959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.278050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.772 [2024-11-06 14:34:25.278060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.278071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.278080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.278092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.772 [2024-11-06 14:34:25.278099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.772 [2024-11-06 14:34:25.278207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.773 [2024-11-06 14:34:25.278224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.773 [2024-11-06 14:34:25.278245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 
14:34:25.278341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.773 [2024-11-06 14:34:25.278357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.773 [2024-11-06 14:34:25.278382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.278962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.278973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.279928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.279939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92624 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.280090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.280218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.280228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.773 [2024-11-06 14:34:25.280335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.773 [2024-11-06 14:34:25.280345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.774 [2024-11-06 14:34:25.280356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.774 [2024-11-06 14:34:25.280366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.774 [2024-11-06 14:34:25.280529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.774 [2024-11-06 14:34:25.280616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92656 len:8 PRP1 0x0 PRP2 0x0 00:29:57.774 [2024-11-06 14:34:25.280632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.774 [2024-11-06 14:34:25.280646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.774 [2024-11-06 14:34:25.280659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.774 [2024-11-06 14:34:25.280668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92664 len:8 PRP1 0x0 PRP2 0x0 00:29:57.774 [2024-11-06 14:34:25.280678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.774 [2024-11-06 14:34:25.281228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.774 [2024-11-06 14:34:25.281255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.774 [2024-11-06 14:34:25.281266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.774 [2024-11-06 14:34:25.281275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.774 [2024-11-06 14:34:25.281285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.774 [2024-11-06 14:34:25.281296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.774 [2024-11-06 14:34:25.281305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.774 [2024-11-06 14:34:25.281313] 
00:29:57.774 [2024-11-06 14:34:25.281322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:29:57.774 [2024-11-06 14:34:25.281719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:57.774 [2024-11-06 14:34:25.281758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:29:57.774 [2024-11-06 14:34:25.281914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.774 [2024-11-06 14:34:25.281939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:29:57.774 [2024-11-06 14:34:25.281964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:29:57.774 [2024-11-06 14:34:25.281984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:29:57.774 [2024-11-06 14:34:25.282000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:57.774 [2024-11-06 14:34:25.282010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:57.774 [2024-11-06 14:34:25.282104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:57.774 [2024-11-06 14:34:25.282124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:57.774 [2024-11-06 14:34:25.282134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:57.774 14:34:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:29:59.645 5728.00 IOPS, 22.38 MiB/s [2024-11-06T14:34:27.539Z] 3818.67 IOPS, 14.92 MiB/s [2024-11-06T14:34:27.539Z]
[2024-11-06 14:34:27.282347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.904 [2024-11-06 14:34:27.282388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:29:59.904 [2024-11-06 14:34:27.282399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:29:59.904 [2024-11-06 14:34:27.282418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:29:59.904 [2024-11-06 14:34:27.282433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:59.904 [2024-11-06 14:34:27.282440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:59.904 [2024-11-06 14:34:27.282449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:59.904 [2024-11-06 14:34:27.282458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
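[Editor's note, not part of the captured log: two decoding hints for the errors above. The "(00/08)" pair printed by spdk_nvme_print_completion is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" -- the expected status when a reset tears down submission queue 1 with I/O still outstanding. The connect() errno 111 is ECONNREFUSED on Linux. A minimal bash sketch of the status decode; decode_nvme_status is a hypothetical helper written for this note, not an SPDK tool:

    # Decode the "(sct/sc)" pair from spdk_nvme_print_completion output.
    decode_nvme_status() {
      local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
      if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
        echo "Generic Command Status 0x08: Command Aborted due to SQ Deletion"
      else
        echo "sct=$sct sc=$sc (see the NVMe base spec status code tables)"
      fi
    }
    decode_nvme_status 00/08   # -> the ABORTED - SQ DELETION case logged above
]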
00:29:59.904 [2024-11-06 14:34:27.282467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:30:00.162 14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
14:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:30:01.663 2864.00 IOPS, 11.19 MiB/s [2024-11-06T14:34:29.298Z] 2291.20 IOPS, 8.95 MiB/s [2024-11-06T14:34:29.298Z]
[2024-11-06 14:34:29.282583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.663 [2024-11-06 14:34:29.282631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
[2024-11-06 14:34:29.282646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
[2024-11-06 14:34:29.282668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
[2024-11-06 14:34:29.282694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
[2024-11-06 14:34:29.282704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
[2024-11-06 14:34:29.282715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-11-06 14:34:29.282727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
[2024-11-06 14:34:29.282737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:03.970 1909.33 IOPS, 7.46 MiB/s [2024-11-06T14:34:31.605Z] 1636.57 IOPS, 6.39 MiB/s [2024-11-06T14:34:31.605Z]
[2024-11-06 14:34:31.282811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-11-06 14:34:31.282855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
[2024-11-06 14:34:31.282864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
[2024-11-06 14:34:31.282873] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
[2024-11-06 14:34:31.282885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
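[Editor's note, not part of the captured log: a rough timeline check. The failed reconnect attempts above land at 14:34:25, 14:34:27 and 14:34:29, and by 14:34:31 the controller reports "already in failed state". Assuming this controller was attached with the same --reconnect-delay-sec / --ctrlr-loss-timeout-sec 5 knobs that appear later in this log for the second bdevperf run (the attach line for pid 106268 falls outside this excerpt), the give-up point is consistent:

    # Seconds within minute 14:34, read off the timestamps above.
    awk 'BEGIN { printf "disconnected %.1f s > 5 s ctrlr-loss timeout\n", 31.28 - 25.28 }'
]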
00:30:04.794 1432.00 IOPS, 5.59 MiB/s
00:30:04.794 Latency(us)
00:30:04.794 [2024-11-06T14:34:32.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.794 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:04.794 Verification LBA range: start 0x0 length 0x4000
00:30:04.794 NVMe0n1 : 8.12 1411.16 5.51 15.77 0.00 89418.33 2016.49 7020619.62
00:30:04.794 [2024-11-06T14:34:32.429Z] ===================================================================================================================
00:30:04.794 [2024-11-06T14:34:32.429Z] Total : 1411.16 5.51 15.77 0.00 89418.33 2016.49 7020619.62
00:30:04.794 {
00:30:04.794   "results": [
00:30:04.794     {
00:30:04.794       "job": "NVMe0n1",
00:30:04.794       "core_mask": "0x4",
00:30:04.794       "workload": "verify",
00:30:04.794       "status": "finished",
00:30:04.794       "verify_range": {
00:30:04.794         "start": 0,
00:30:04.794         "length": 16384
00:30:04.794       },
00:30:04.794       "queue_depth": 128,
00:30:04.794       "io_size": 4096,
00:30:04.794       "runtime": 8.118118,
00:30:04.794       "iops": 1411.1645088184232,
00:30:04.794       "mibps": 5.512361362571966,
00:30:04.794       "io_failed": 128,
00:30:04.794       "io_timeout": 0,
00:30:04.794       "avg_latency_us": 89418.33302167447,
00:30:04.794       "min_latency_us": 2016.4923076923078,
00:30:04.794       "max_latency_us": 7020619.618461538
00:30:04.794     }
00:30:04.794   ],
00:30:04.794   "core_count": 1
00:30:04.794 }
00:30:05.361 14:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:30:05.361 14:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:30:05.361 14:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:05.361 14:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:30:05.361 14:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:30:05.361 14:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:30:05.361 14:34:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 106316
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 106268
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 106268 ']'
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 106268
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 106268
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
killing process with pid 106268
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 106268'
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 106268
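[Editor's note, not part of the captured log: a quick consistency check on the JSON results above. bdevperf reports "mibps" as iops * io_size / 2^20, which for the 4096-byte I/O size used here reduces to iops / 256:

    # 1411.1645088184232 * 4096 / 1048576 = 5.512361..., matching "mibps";
    # iops * runtime (1411.16 * 8.118118) puts the completed I/O count near 11456.
    awk 'BEGIN { printf "%.6f MiB/s\n", 1411.1645088184232 * 4096 / (1024 * 1024) }'
]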
Received shutdown signal, test time was about 8.982671 seconds
00:30:05.626
00:30:05.626 Latency(us)
00:30:05.626 [2024-11-06T14:34:33.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.626 [2024-11-06T14:34:33.261Z] ===================================================================================================================
00:30:05.626 [2024-11-06T14:34:33.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:05.626 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 106268
00:30:06.193 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-11-06 14:34:33.962171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:30:06.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=106475
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 106475 /var/tmp/bdevperf.sock
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 106475 ']'
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:06.452 14:34:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:06.452 [2024-11-06 14:34:34.043213] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
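[Editor's note, not part of the captured log: a gloss of the bdevperf invocation above. The flag meanings below follow SPDK's common application options; -f is left as printed rather than guessed at:

    # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    #     -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
    #
    #   -m 0x4     core mask: run on core 2 (matches "Reactor started on core 2" below)
    #   -z         start idle and wait for RPC configuration before running I/O
    #   -r PATH    RPC listen socket, used by the rpc.py calls that follow
    #   -q 128     queue depth
    #   -o 4096    I/O size in bytes
    #   -w verify  write/read/compare workload
    #   -t 10      run time in seconds
]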
00:30:06.452 [2024-11-06 14:34:34.043332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106475 ]
00:30:06.710 [2024-11-06 14:34:34.196131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:06.710 [2024-11-06 14:34:34.282239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:07.276 14:34:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:07.276 14:34:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
00:30:07.276 14:34:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:30:07.534 14:34:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:30:07.792 NVMe0n1
00:30:07.792 14:34:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=106522
00:30:07.792 14:34:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:08.051 14:34:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:30:08.988 Running I/O for 10 seconds...
00:30:08.988 14:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:08.988 11164.00 IOPS, 43.61 MiB/s [2024-11-06T14:34:36.623Z]
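[Editor's note, not part of the captured log: the three --*-sec knobs passed to bdev_nvme_attach_controller above drive the recovery behavior in the abort dump that follows -- after a disconnect, bdev_nvme retries the connection every reconnect-delay-sec seconds, starts failing queued I/O back to the application after fast-io-fail-timeout-sec seconds, and deletes the controller once ctrlr-loss-timeout-sec seconds pass without a successful reconnect. A minimal sketch of the same attach, using the rpc.py path from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 2 --ctrlr-loss-timeout-sec 5
]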
[2024-11-06 14:34:36.557420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:08.988 [2024-11-06 14:34:36.557476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... several dozen near-identical NOTICE pairs trimmed for readability: the remaining outstanding WRITEs (lba:99016-99056, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs (lba:98168-98376, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all len:8 on sqid:1, were each completed with ABORTED - SQ DELETION (00/08) after nvmf_subsystem_remove_listener pulled the TCP listener mid-run ...]
00:30:08.988 [2024-11-06 14:34:36.560068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.988 [2024-11-06 14:34:36.560089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.988 [2024-11-06 14:34:36.560104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.560882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.560893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 
[2024-11-06 14:34:36.561068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.561732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.561740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.989 [2024-11-06 14:34:36.562575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.989 [2024-11-06 14:34:36.562586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98712 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.562988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.562999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:08.990 [2024-11-06 14:34:36.563049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563452] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.563492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.563511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.563738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.563756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.563787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.563932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.564070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.564091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.564217] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.564236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.990 [2024-11-06 14:34:36.564477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.564497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.564516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.564671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.564788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.564808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.990 [2024-11-06 14:34:36.564829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.990 [2024-11-06 14:34:36.564945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.991 [2024-11-06 14:34:36.564967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.991 [2024-11-06 14:34:36.565084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.991 [2024-11-06 14:34:36.565106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.991 [2024-11-06 14:34:36.565316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
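When triaging a capture like the flood above, the abort records are easier to reason about as counts than as raw lines. A minimal sketch, assuming the console output has been saved to a file named build.log (hypothetical name):

    # Sketch only: tally aborted READ vs WRITE commands in a saved copy of this log.
    grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c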
00:30:08.991 [2024-11-06 14:34:36.565671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.991 [2024-11-06 14:34:36.565680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.565690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:30:08.991 [2024-11-06 14:34:36.565769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:08.991 [2024-11-06 14:34:36.565783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:08.991 [2024-11-06 14:34:36.565793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99184 len:8 PRP1 0x0 PRP2 0x0 00:30:08.991 [2024-11-06 14:34:36.565803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.566253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.991 [2024-11-06 14:34:36.566280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.566293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.991 [2024-11-06 14:34:36.566302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.566311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.991 [2024-11-06 14:34:36.566320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.566329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.991 [2024-11-06 14:34:36.566466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.991 [2024-11-06 14:34:36.566476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:30:08.991 [2024-11-06 14:34:36.566889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.991 [2024-11-06 14:34:36.566921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:30:08.991 [2024-11-06 14:34:36.567185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-06 14:34:36.567212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:30:08.991 [2024-11-06 14:34:36.567224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:30:08.991 [2024-11-06 14:34:36.567243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x61500002a880 (9): Bad file descriptor 00:30:08.991 [2024-11-06 14:34:36.567363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.991 [2024-11-06 14:34:36.567381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.991 [2024-11-06 14:34:36.567393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.991 [2024-11-06 14:34:36.567505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:08.991 [2024-11-06 14:34:36.567525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.991 14:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:30:10.184 6135.50 IOPS, 23.97 MiB/s [2024-11-06T14:34:37.819Z] [2024-11-06 14:34:37.567773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.184 [2024-11-06 14:34:37.567825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:30:10.184 [2024-11-06 14:34:37.567839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:30:10.184 [2024-11-06 14:34:37.567863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:30:10.184 [2024-11-06 14:34:37.567882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.184 [2024-11-06 14:34:37.567891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.184 [2024-11-06 14:34:37.567903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.184 [2024-11-06 14:34:37.567915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:10.184 [2024-11-06 14:34:37.567926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.184 14:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:10.184 [2024-11-06 14:34:37.765932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:10.184 14:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 106522 00:30:11.118 4090.33 IOPS, 15.98 MiB/s [2024-11-06T14:34:38.753Z] [2024-11-06 14:34:38.587732] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
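What the log above records is the timeout test bouncing the target's TCP listener: while the listener is gone, every host reconnect fails with connect() errno 111 (ECONNREFUSED) and the controller reset keeps failing; the host retries once per second (the attempts at 14:34:36.567 and 14:34:37.567) until host/timeout.sh@91 re-adds the listener, after which the next reconnect succeeds and the host logs "Resetting controller successful". A minimal bash sketch of the same bounce, built from the two rpc.py calls that appear verbatim in this log (target address, port, and RPC socket are assumed to match this run):

    #!/usr/bin/env bash
    # Sketch only: reproduce the listener bounce driven by host/timeout.sh.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Remove the TCP listener: queued I/O is completed as ABORTED - SQ DELETION
    # and host reconnects start failing with ECONNREFUSED (errno 111).
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420

    sleep 1    # leave the host retrying its controller reset once per second

    # Re-add the listener: the next reconnect poll succeeds and the host side
    # reports the controller reset as successful.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420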
00:30:12.987 3067.75 IOPS, 11.98 MiB/s [2024-11-06T14:34:41.557Z] 4394.00 IOPS, 17.16 MiB/s [2024-11-06T14:34:42.493Z] 5499.83 IOPS, 21.48 MiB/s [2024-11-06T14:34:43.867Z] 6317.86 IOPS, 24.68 MiB/s [2024-11-06T14:34:44.803Z] 6955.00 IOPS, 27.17 MiB/s [2024-11-06T14:34:45.739Z] 7461.22 IOPS, 29.15 MiB/s [2024-11-06T14:34:45.739Z] 7850.40 IOPS, 30.67 MiB/s 00:30:18.104 Latency(us) 00:30:18.104 [2024-11-06T14:34:45.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.104 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:18.104 Verification LBA range: start 0x0 length 0x4000 00:30:18.104 NVMe0n1 : 10.01 7856.02 30.69 0.00 0.00 16270.76 1606.89 3032804.43 00:30:18.104 [2024-11-06T14:34:45.739Z] =================================================================================================================== 00:30:18.104 [2024-11-06T14:34:45.739Z] Total : 7856.02 30.69 0.00 0.00 16270.76 1606.89 3032804.43 00:30:18.104 { 00:30:18.104 "results": [ 00:30:18.104 { 00:30:18.104 "job": "NVMe0n1", 00:30:18.104 "core_mask": "0x4", 00:30:18.104 "workload": "verify", 00:30:18.104 "status": "finished", 00:30:18.104 "verify_range": { 00:30:18.104 "start": 0, 00:30:18.104 "length": 16384 00:30:18.104 }, 00:30:18.104 "queue_depth": 128, 00:30:18.105 "io_size": 4096, 00:30:18.105 "runtime": 10.009141, 00:30:18.105 "iops": 7856.018813202852, 00:30:18.105 "mibps": 30.68757348907364, 00:30:18.105 "io_failed": 0, 00:30:18.105 "io_timeout": 0, 00:30:18.105 "avg_latency_us": 16270.758992933002, 00:30:18.105 "min_latency_us": 1606.8923076923077, 00:30:18.105 "max_latency_us": 3032804.4307692307 00:30:18.105 } 00:30:18.105 ], 00:30:18.105 "core_count": 1 00:30:18.105 } 00:30:18.105 14:34:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=106639 00:30:18.105 14:34:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:30:18.105 14:34:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:18.105 Running I/O for 10 seconds... 
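bdevperf prints the run summary twice: the human-readable Latency(us) table and, below it, the same numbers as a JSON block for scripted consumers. A minimal sketch of pulling the headline figures out of that JSON with jq, where results.json is a hypothetical file holding the JSON block above (this run only writes it to the console):

    # Sketch only: extract the headline numbers from bdevperf's JSON summary.
    jq '.results[0] | {job, iops, mibps, avg_latency_us}' results.json

For the run above this yields NVMe0n1 at ~7856 IOPS (30.69 MiB/s) with ~16271 us average latency, matching the table.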
00:30:19.040 14:34:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:19.040 11059.00 IOPS, 43.20 MiB/s [2024-11-06T14:34:46.675Z] [2024-11-06 14:34:46.666822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set
[... the same recv-state message repeats ~23 more times through 14:34:46.666997 ...]
00:30:19.041 [2024-11-06 14:34:46.670571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.041 [2024-11-06 14:34:46.670687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST/abort pair repeats for admin cid:2, cid:1 and cid:0 through 14:34:46.671804 ...]
00:30:19.041 [2024-11-06 14:34:46.671872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:30:19.041 [2024-11-06 14:34:46.672526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.041 [2024-11-06 14:34:46.672600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pattern repeats between 14:34:46.672 and 14:34:46.685 for the I/O still queued on qid:1 (WRITEs lba:98384-98536 with SGL DATA BLOCK, READs lba:98064-98184 with SGL TRANSPORT DATA BLOCK), every one completing ABORTED - SQ DELETION (00/08) ...]
00:30:19.303 14:34:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
[... aborts continue in the same pattern for WRITEs lba:98544-98584 ...] [2024-11-06 14:34:46.685616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-06 14:34:46.685651]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.685685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.686230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.686600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.686655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.686689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.687068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.687109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.687137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.687518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.687574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.687934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688796] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.688894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.688904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.303 [2024-11-06 14:34:46.689740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.303 [2024-11-06 14:34:46.689748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.689759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.689838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.689849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.689857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.689869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.689877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 
14:34:46.689887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.689896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.690038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.690119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.690141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.690160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.690540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.690966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.690975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.304 [2024-11-06 14:34:46.691392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98896 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.304 [2024-11-06 14:34:46.691967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.304 [2024-11-06 14:34:46.691978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.691989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:19.305 [2024-11-06 14:34:46.692128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.692940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.692961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.305 [2024-11-06 14:34:46.693072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.693101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:19.305 [2024-11-06 14:34:46.693175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:19.305 [2024-11-06 14:34:46.693185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99080 len:8 PRP1 0x0 PRP2 0x0 00:30:19.305 [2024-11-06 14:34:46.693195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.305 [2024-11-06 14:34:46.693637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:30:19.305 [2024-11-06 14:34:46.693933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:19.305 [2024-11-06 14:34:46.694124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.305 [2024-11-06 14:34:46.694207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:30:19.305 [2024-11-06 14:34:46.694230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:30:19.305 [2024-11-06 14:34:46.694248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:30:19.305 [2024-11-06 14:34:46.694266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:30:19.305 [2024-11-06 14:34:46.694274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:30:19.305 [2024-11-06 14:34:46.694286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:19.305 [2024-11-06 14:34:46.694398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:30:19.305 [2024-11-06 14:34:46.694417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:20.241 6129.00 IOPS, 23.94 MiB/s [2024-11-06T14:34:47.876Z]
00:30:20.241 [2024-11-06 14:34:47.694573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.241 [2024-11-06 14:34:47.694609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:30:20.241 [2024-11-06 14:34:47.694619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:30:20.241 [2024-11-06 14:34:47.694635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:30:20.241 [2024-11-06 14:34:47.694648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:30:20.241 [2024-11-06 14:34:47.694656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:30:20.241 [2024-11-06 14:34:47.694665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:20.241 [2024-11-06 14:34:47.694674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:30:20.241 [2024-11-06 14:34:47.694682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:21.176 4086.00 IOPS, 15.96 MiB/s [2024-11-06T14:34:48.811Z]
00:30:21.176 [2024-11-06 14:34:48.694750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.176 [2024-11-06 14:34:48.694781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:30:21.176 [2024-11-06 14:34:48.694790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:30:21.176 [2024-11-06 14:34:48.694805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:30:21.176 [2024-11-06 14:34:48.694817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:30:21.176 [2024-11-06 14:34:48.694823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:30:21.176 [2024-11-06 14:34:48.694831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:21.176 [2024-11-06 14:34:48.694839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:30:21.176 [2024-11-06 14:34:48.694846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:22.125 3064.50 IOPS, 11.97 MiB/s [2024-11-06T14:34:49.760Z]
00:30:22.125 14:34:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:22.125 [2024-11-06 14:34:49.697989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.125 [2024-11-06 14:34:49.698038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:30:22.125 [2024-11-06 14:34:49.698051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:30:22.125 [2024-11-06 14:34:49.698452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:30:22.125 [2024-11-06 14:34:49.698785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:30:22.125 [2024-11-06 14:34:49.698805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:30:22.125 [2024-11-06 14:34:49.698815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:22.125 [2024-11-06 14:34:49.698826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:30:22.125 [2024-11-06 14:34:49.698835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:22.387 [2024-11-06 14:34:49.878928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:30:22.387 14:34:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 106639
00:30:23.212 2451.60 IOPS, 9.58 MiB/s [2024-11-06T14:34:50.847Z]
00:30:23.212 [2024-11-06 14:34:50.723184] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
00:30:25.081 3685.17 IOPS, 14.40 MiB/s [2024-11-06T14:34:53.689Z]
4855.00 IOPS, 18.96 MiB/s [2024-11-06T14:34:54.623Z]
5659.50 IOPS, 22.11 MiB/s [2024-11-06T14:34:55.998Z]
6313.11 IOPS, 24.66 MiB/s [2024-11-06T14:34:55.998Z]
6847.70 IOPS, 26.75 MiB/s
00:30:28.363 Latency(us)
00:30:28.363 [2024-11-06T14:34:55.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:28.363 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:28.363 Verification LBA range: start 0x0 length 0x4000
00:30:28.363 NVMe0n1 : 10.01 6856.44 26.78 4576.29 0.00 11166.25 677.42 3032804.43
00:30:28.363 [2024-11-06T14:34:55.998Z] ===================================================================================================================
00:30:28.363 [2024-11-06T14:34:55.998Z] Total : 6856.44 26.78 4576.29 0.00 11166.25 0.00 3032804.43
00:30:28.363 {
00:30:28.363   "results": [
00:30:28.363     {
00:30:28.363       "job": "NVMe0n1",
00:30:28.363       "core_mask": "0x4",
00:30:28.363       "workload": "verify",
00:30:28.363       "status": "finished",
00:30:28.363       "verify_range": {
00:30:28.363         "start": 0,
00:30:28.363         "length": 16384
00:30:28.363       },
00:30:28.363       "queue_depth": 128,
00:30:28.363       "io_size": 4096,
00:30:28.363       "runtime": 10.005918,
00:30:28.363       "iops": 6856.442357412883,
00:30:28.363       "mibps": 26.782977958644075,
00:30:28.363       "io_failed": 45790,
00:30:28.363       "io_timeout": 0,
00:30:28.363       "avg_latency_us": 11166.254377067313,
00:30:28.363       "min_latency_us": 677.4153846153846,
00:30:28.363       "max_latency_us": 3032804.4307692307
00:30:28.363     }
00:30:28.363   ],
00:30:28.363   "core_count": 1
00:30:28.363 }
00:30:28.363 14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 106475
00:30:28.363 14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 106475 ']'
00:30:28.363 14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 106475
00:30:28.363 14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:30:28.363 14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:28.363 14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 106475
killing process with pid 106475
Received shutdown signal, test time was about 10.000000 seconds
00:30:28.363
00:30:28.363 Latency(us)
[2024-11-06T14:34:55.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-06T14:34:55.998Z] ===================================================================================================================
[2024-11-06T14:34:55.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 106475'
14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 106475
14:34:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 106475
00:30:28.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=106766
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 106766 /var/tmp/bdevperf.sock
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 106766 ']'
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:28.622 14:34:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:28.622 [2024-11-06 14:34:56.205254] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:30:28.622 [2024-11-06 14:34:56.205367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106766 ]
00:30:28.881 [2024-11-06 14:34:56.356487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:28.881 [2024-11-06 14:34:56.440811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:29.448 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:29.448 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
00:30:29.448 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=106794
00:30:29.448 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106766 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:30:29.448 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:30:29.706 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:30:29.964 NVMe0n1
00:30:29.964 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=106842
00:30:29.964 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:30:29.964 14:34:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:30.222 Running I/O for 10 seconds...
00:30:31.161 14:34:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:31.161 22206.00 IOPS, 86.74 MiB/s [2024-11-06T14:34:58.796Z]
00:30:31.161 [2024-11-06 14:34:58.728341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set
00:30:31.161 [... ~80 identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x618000005880 (14:34:58.728376 through 14:34:58.728828) omitted ...]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728938] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.728996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729064] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:30:31.162 [2024-11-06 14:34:58.729602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729776] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.162 [2024-11-06 14:34:58.729855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.162 [2024-11-06 14:34:58.729863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.729990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.729997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 
[2024-11-06 14:34:58.730231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730390] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.163 [2024-11-06 14:34:58.730457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.163 [2024-11-06 14:34:58.730464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730685] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.730743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.730751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.731985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.731993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.164 [2024-11-06 14:34:58.732082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.164 [2024-11-06 14:34:58.732122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.164 [2024-11-06 14:34:58.732146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.164 [2024-11-06 14:34:58.732155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103216 len:8 PRP1 0x0 PRP2 0x0 00:30:31.164 [2024-11-06 14:34:58.732162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42160 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125272 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 
14:34:58.732266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119256 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125472 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26152 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67544 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41728 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33496 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123504 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115024 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70800 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52736 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10832 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102056 len:8 PRP1 0x0 PRP2 0x0 00:30:31.165 [2024-11-06 14:34:58.732546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.165 [2024-11-06 14:34:58.732552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:31.165 [2024-11-06 14:34:58.732556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:31.165 [2024-11-06 14:34:58.732562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:104392 len:8 PRP1 0x0 PRP2 0x0
00:30:31.165 [2024-11-06 14:34:58.732569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.165 [2024-11-06 14:34:58.732576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:31.165 [2024-11-06 14:34:58.732580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:31.165 [2024-11-06 14:34:58.732586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45672 len:8 PRP1 0x0 PRP2 0x0
00:30:31.165 [2024-11-06 14:34:58.732592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting-queued-i/o / Command-completed-manually / READ / ABORTED - SQ DELETION cycle repeats between 14:34:58.732600 and 14:34:58.744105 for the remaining queued READs, at lba 102104, 71776, 53640, 11824, 40096, 99944, 76312, 108296, 109496, 20528, 73056, 55864, 118704, 126120, 92832, 109672 and 51120 ...]
00:30:31.166 [2024-11-06 14:34:58.744105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
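Every queued READ above is failed back with the same generic status, ABORTED - SQ DELETION (status code type 00, code 08), which the host driver reports when a submission queue is torn down mid-reset. When triaging a run like this, a quick way to summarize the flood is to count aborted completions and distinct LBAs; the capture file name below is an assumption, not part of the test:

    # nvmf_timeout.log is a hypothetical capture of the console output above
    grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log          # total aborted completions
    grep -o 'lba:[0-9]\+' nvmf_timeout.log | sort -u | wc -l  # distinct LBAs affected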
00:30:31.166 [2024-11-06 14:34:58.744703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.166 [2024-11-06 14:34:58.744736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.166 [2024-11-06 14:34:58.744762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.166 [2024-11-06 14:34:58.744781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.166 [2024-11-06 14:34:58.744801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.166 [2024-11-06 14:34:58.744820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.166 [2024-11-06 14:34:58.744839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.166 [2024-11-06 14:34:58.744857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.166 [2024-11-06 14:34:58.744874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:30:31.166 [2024-11-06 14:34:58.745391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:30:31.166 [2024-11-06 14:34:58.745447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:30:31.166 [2024-11-06 14:34:58.745602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.166 [2024-11-06 14:34:58.745637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:30:31.166 [2024-11-06 14:34:58.745661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:30:31.166 [2024-11-06 14:34:58.745694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:30:31.166 [2024-11-06 14:34:58.745726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:30:31.166 [2024-11-06 14:34:58.745744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:30:31.166 [2024-11-06 14:34:58.745771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:30:31.166 [2024-11-06 14:34:58.745793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
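The failure pattern from here on is mechanical: connect() to 10.0.0.3:4420 returns errno 111 (ECONNREFUSED) because the target's listener is gone, the qpair is marked failed, and bdev_nvme schedules another reset; the timestamps that follow show the retries landing roughly two seconds apart. A minimal bash sketch of that retry cadence, using bash's /dev/tcp redirection and an assumed 8-second overall budget:

    deadline=$((SECONDS + 8))                       # assumed budget, matching the test window
    until (exec 3<>/dev/tcp/10.0.0.3/4420) 2>/dev/null; do
      (( SECONDS >= deadline )) && { echo 'controller unreachable, giving up' >&2; break; }
      sleep 2                                       # matches the ~2000 ms reconnect delay in the trace
    done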
00:30:31.166 [2024-11-06 14:34:58.745814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:31.166 14:34:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 106842 00:30:33.035 12325.50 IOPS, 48.15 MiB/s [2024-11-06T14:35:00.928Z] 8217.00 IOPS, 32.10 MiB/s [2024-11-06T14:35:00.928Z] [2024-11-06 14:35:00.746007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.293 [2024-11-06 14:35:00.746052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:30:33.293 [2024-11-06 14:35:00.746066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:30:33.293 [2024-11-06 14:35:00.746088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:30:33.293 [2024-11-06 14:35:00.746114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:33.293 [2024-11-06 14:35:00.746122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:33.293 [2024-11-06 14:35:00.746132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:33.293 [2024-11-06 14:35:00.746142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:30:33.293 [2024-11-06 14:35:00.746152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:35.162 6162.75 IOPS, 24.07 MiB/s [2024-11-06T14:35:02.797Z] 4930.20 IOPS, 19.26 MiB/s [2024-11-06T14:35:02.797Z] [2024-11-06 14:35:02.746273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.162 [2024-11-06 14:35:02.746306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:30:35.162 [2024-11-06 14:35:02.746318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:30:35.162 [2024-11-06 14:35:02.746335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:30:35.162 [2024-11-06 14:35:02.746349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:35.162 [2024-11-06 14:35:02.746356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:35.162 [2024-11-06 14:35:02.746365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:35.162 [2024-11-06 14:35:02.746374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:30:35.162 [2024-11-06 14:35:02.746382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:37.031 4108.50 IOPS, 16.05 MiB/s [2024-11-06T14:35:04.924Z] 3521.57 IOPS, 13.76 MiB/s [2024-11-06T14:35:04.924Z] [2024-11-06 14:35:04.746433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:30:37.289 [2024-11-06 14:35:04.746461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:30:37.289 [2024-11-06 14:35:04.746469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:30:37.289 [2024-11-06 14:35:04.746477] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:30:37.289 [2024-11-06 14:35:04.746486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:30:38.225 3081.38 IOPS, 12.04 MiB/s
00:30:38.225 Latency(us)
00:30:38.225 [2024-11-06T14:35:05.860Z] Device Information          : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average       min          max
00:30:38.225 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:30:38.225 NVMe0n1                     :       8.13  3032.62   11.85   15.75  0.00  41888.25   2003.89   7072241.82
00:30:38.225 [2024-11-06T14:35:05.860Z] ===================================================================================================================
00:30:38.225 [2024-11-06T14:35:05.860Z] Total                       :             3032.62   11.85   15.75  0.00  41888.25   2003.89   7072241.82
00:30:38.225 {
00:30:38.225   "results": [
00:30:38.225     {
00:30:38.225       "job": "NVMe0n1",
00:30:38.225       "core_mask": "0x4",
00:30:38.225       "workload": "randread",
00:30:38.225       "status": "finished",
00:30:38.225       "queue_depth": 128,
00:30:38.225       "io_size": 4096,
00:30:38.225       "runtime": 8.128618,
00:30:38.225       "iops": 3032.618828932544,
00:30:38.225       "mibps": 11.84616730051775,
00:30:38.225       "io_failed": 128,
00:30:38.225       "io_timeout": 0,
00:30:38.225       "avg_latency_us": 41888.25178640722,
00:30:38.225       "min_latency_us": 2003.8892307692308,
00:30:38.225       "max_latency_us": 7072241.821538461
00:30:38.225     }
00:30:38.225   ],
00:30:38.225   "core_count": 1
00:30:38.225 }
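The summary table and the JSON above agree with each other: with a 4096-byte I/O size, MiB/s is IOPS divided by 256, and Fail/s is io_failed divided by runtime. A small awk check, with the numbers copied straight from the JSON:

    awk 'BEGIN {
      iops = 3032.618828932544; io_size = 4096; runtime = 8.128618; io_failed = 128
      printf "MiB/s  = %.2f\n", iops * io_size / 1048576   # 11.85, matches "mibps"
      printf "Fail/s = %.2f\n", io_failed / runtime        # 15.75, matches the table
    }'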
00:30:38.225 14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:38.225 Attaching 5 probes...
00:30:38.225 1285.060348: reset bdev controller NVMe0
00:30:38.225 1285.192765: reconnect bdev controller NVMe0
00:30:38.225 3285.583969: reconnect delay bdev controller NVMe0
00:30:38.225 3285.603437: reconnect bdev controller NVMe0
00:30:38.225 5285.895229: reconnect delay bdev controller NVMe0
00:30:38.225 5285.910001: reconnect bdev controller NVMe0
00:30:38.225 7286.108098: reconnect delay bdev controller NVMe0
00:30:38.225 7286.119625: reconnect bdev controller NVMe0
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 106794
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 106766
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 106766 ']'
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 106766
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 106766
00:30:38.225 killing process with pid 106766
00:30:38.225 Received shutdown signal, test time was about 8.183616 seconds
00:30:38.225
00:30:38.225 Latency(us)
00:30:38.225 [2024-11-06T14:35:05.860Z] Device Information          : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average       min          max
00:30:38.225 [2024-11-06T14:35:05.860Z] ===================================================================================================================
00:30:38.225 [2024-11-06T14:35:05.860Z] Total                       :               0.00    0.00    0.00  0.00      0.00      0.00         0.00
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 106766'
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 106766
14:35:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 106766
00:30:38.792 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync
14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e
14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
14:35:06
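The pass criterion for the whole timeout test is the grep/arithmetic pair at timeout.sh@132 above: the run passes because (( 3 <= 2 )) is false, i.e. bdevperf logged more than two 'reconnect delay' probes during the 8-second window (one per 2-second retry after the first immediate reconnect). Restated as standalone bash; the trace.txt argument is an assumption, since the xtrace does not show what grep read from:

    count=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    if (( count <= 2 )); then
      echo "expected more than 2 reconnect delays, saw $count" >&2
      exit 1
    fi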
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.053 rmmod nvme_tcp 00:30:39.053 rmmod nvme_fabrics 00:30:39.053 rmmod nvme_keyring 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 106177 ']' 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 106177 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 106177 ']' 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 106177 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:39.053 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 106177 00:30:39.316 killing process with pid 106177 00:30:39.316 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:39.316 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:39.316 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 106177' 00:30:39.316 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 106177 00:30:39.316 14:35:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 106177 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:39.883 14:35:07 
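Both process teardowns above (pids 106766 and 106177) go through the same killprocess helper, whose steps the xtrace exposes: check the pid is non-empty and alive with kill -0, confirm the process name via ps, special-case sudo, then kill and wait. A loose reconstruction of that sequence, simplified and not the literal autotest_common.sh code:

    killprocess() {
      local pid=$1
      [[ -z $pid ]] && return 1                       # mirrors the '[ -z ... ]' guard (@952)
      kill -0 "$pid" 2>/dev/null || return 0          # already gone (@956)
      local name
      name=$(ps --no-headers -o comm= "$pid")         # process name lookup (@958)
      [[ $name == sudo ]] && return 0                 # placeholder for the sudo branch (@962)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                 # reap it if it is our child (@976)
    }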
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:39.883 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:39.884 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:30:40.144 00:30:40.144 real 0m47.446s 00:30:40.144 user 2m18.080s 00:30:40.144 sys 0m4.254s 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:40.144 ************************************ 00:30:40.144 END TEST nvmf_timeout 00:30:40.144 ************************************ 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:40.144 00:30:40.144 real 6m35.166s 00:30:40.144 user 17m53.110s 00:30:40.144 sys 0m56.245s 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:40.144 14:35:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.144 ************************************ 00:30:40.144 END TEST nvmf_host 00:30:40.144 ************************************ 00:30:40.144 14:35:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:40.144 14:35:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:40.144 14:35:07 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.144 14:35:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:40.144 14:35:07 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:40.144 14:35:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.144 ************************************ 00:30:40.144 START TEST nvmf_target_core_interrupt_mode 00:30:40.145 ************************************ 00:30:40.145 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.145 * Looking for test storage... 
00:30:40.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:30:40.145 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:40.145 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:30:40.145 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:40.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.405 --rc genhtml_branch_coverage=1 00:30:40.405 --rc genhtml_function_coverage=1 00:30:40.405 --rc genhtml_legend=1 00:30:40.405 --rc geninfo_all_blocks=1 00:30:40.405 --rc geninfo_unexecuted_blocks=1 00:30:40.405 00:30:40.405 ' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:40.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.405 --rc genhtml_branch_coverage=1 00:30:40.405 --rc genhtml_function_coverage=1 00:30:40.405 --rc genhtml_legend=1 00:30:40.405 --rc geninfo_all_blocks=1 00:30:40.405 --rc geninfo_unexecuted_blocks=1 00:30:40.405 00:30:40.405 ' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:40.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.405 --rc genhtml_branch_coverage=1 00:30:40.405 --rc genhtml_function_coverage=1 00:30:40.405 --rc genhtml_legend=1 00:30:40.405 --rc geninfo_all_blocks=1 00:30:40.405 --rc geninfo_unexecuted_blocks=1 00:30:40.405 00:30:40.405 ' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:40.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.405 --rc genhtml_branch_coverage=1 00:30:40.405 --rc genhtml_function_coverage=1 00:30:40.405 --rc genhtml_legend=1 00:30:40.405 --rc geninfo_all_blocks=1 00:30:40.405 --rc geninfo_unexecuted_blocks=1 00:30:40.405 00:30:40.405 ' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.405 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.406 ************************************ 00:30:40.406 START TEST nvmf_abort 00:30:40.406 ************************************ 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.406 * Looking for test storage... 00:30:40.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:40.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.406 --rc genhtml_branch_coverage=1 00:30:40.406 --rc genhtml_function_coverage=1 00:30:40.406 --rc genhtml_legend=1 00:30:40.406 --rc geninfo_all_blocks=1 00:30:40.406 --rc geninfo_unexecuted_blocks=1 00:30:40.406 00:30:40.406 ' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:40.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.406 --rc genhtml_branch_coverage=1 00:30:40.406 --rc genhtml_function_coverage=1 00:30:40.406 --rc genhtml_legend=1 00:30:40.406 --rc geninfo_all_blocks=1 00:30:40.406 --rc geninfo_unexecuted_blocks=1 00:30:40.406 00:30:40.406 ' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:40.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.406 --rc genhtml_branch_coverage=1 00:30:40.406 --rc genhtml_function_coverage=1 00:30:40.406 --rc genhtml_legend=1 00:30:40.406 --rc geninfo_all_blocks=1 00:30:40.406 --rc geninfo_unexecuted_blocks=1 00:30:40.406 00:30:40.406 ' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:40.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.406 --rc genhtml_branch_coverage=1 00:30:40.406 --rc genhtml_function_coverage=1 00:30:40.406 --rc genhtml_legend=1 00:30:40.406 --rc geninfo_all_blocks=1 00:30:40.406 --rc geninfo_unexecuted_blocks=1 00:30:40.406 00:30:40.406 ' 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
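The lt/cmp_versions walk traced twice above (scripts/common.sh@333-@368) decides that lcov 1.15 predates 2 by splitting both version strings on dots and comparing numerically field by field, padding the shorter one with zeros. A compact bash re-statement of the same idea, not the script's literal code:

    lt() {                         # usage: lt 1.15 2  ->  returns 0 if $1 < $2
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1                     # equal is not less-than
    }
    lt 1.15 2 && echo 'old lcov: enable branch/function coverage flags'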
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.406 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.406 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.407 14:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:40.407 Cannot find device "nvmf_init_br" 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:40.407 Cannot find device "nvmf_init_br2" 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:30:40.407 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:40.668 Cannot find device "nvmf_tgt_br" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:40.668 Cannot find device "nvmf_tgt_br2" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:40.668 Cannot find device "nvmf_init_br" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:40.668 Cannot find device "nvmf_init_br2" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:40.668 Cannot find device "nvmf_tgt_br" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:40.668 Cannot find device "nvmf_tgt_br2" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:40.668 Cannot find device "nvmf_br" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:40.668 Cannot find device "nvmf_init_if" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:40.668 Cannot find device "nvmf_init_if2" 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:40.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:40.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:40.668 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:40.669 
14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:40.669 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:40.930 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:40.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:40.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:30:40.931 00:30:40.931 --- 10.0.0.3 ping statistics --- 00:30:40.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.931 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:40.931 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:40.931 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:30:40.931 00:30:40.931 --- 10.0.0.4 ping statistics --- 00:30:40.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.931 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:40.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:40.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:30:40.931 00:30:40.931 --- 10.0.0.1 ping statistics --- 00:30:40.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.931 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:40.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:30:40.931 00:30:40.931 --- 10.0.0.2 ping statistics --- 00:30:40.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.931 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=107274 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 107274 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 107274 ']' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:40.931 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.931 [2024-11-06 14:35:08.456862] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:40.931 [2024-11-06 14:35:08.459128] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:40.931 [2024-11-06 14:35:08.459242] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.193 [2024-11-06 14:35:08.626612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:41.193 [2024-11-06 14:35:08.749757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.193 [2024-11-06 14:35:08.749821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.193 [2024-11-06 14:35:08.749832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.193 [2024-11-06 14:35:08.749844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.193 [2024-11-06 14:35:08.749851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.193 [2024-11-06 14:35:08.751786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.193 [2024-11-06 14:35:08.752135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.193 [2024-11-06 14:35:08.752154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.454 [2024-11-06 14:35:09.010855] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:41.454 [2024-11-06 14:35:09.011222] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:41.454 [2024-11-06 14:35:09.011561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:41.454 [2024-11-06 14:35:09.012232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
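
For orientation, the network that nvmftestinit builds above reduces to two initiator veth pairs in the root namespace, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, and one bridge joining the peer ends. A condensed sketch of the same commands (the error-tolerant cleanup steps and per-step checks of nvmf/common.sh are omitted; everything here appears verbatim in the trace above):

  # Namespace for the target side; initiator interfaces stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk

  # Initiator veth pairs (host side) and target veth pairs (moved into the netns).
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiators 10.0.0.1/.2, targets 10.0.0.3/.4, all in one /24.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # One bridge ties the four peer ends together so 10.0.0.1-4 can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done

  # Admit NVMe/TCP traffic (port 4420) and let the bridge forward it.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping blocks earlier in the trace verify both directions of this topology before nvmfappstart launches the target inside the namespace.
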
00:30:41.716 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:41.716 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:30:41.716 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:41.716 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.716 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 [2024-11-06 14:35:09.385213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 Malloc0 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 Delay0 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 14:35:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 [2024-11-06 14:35:09.497479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.979 14:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:42.240 [2024-11-06 14:35:09.712771] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:44.786 Initializing NVMe Controllers 00:30:44.786 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:30:44.786 controller IO queue size 128 less than required 00:30:44.786 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:44.786 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:44.786 Initialization complete. Launching workers. 
00:30:44.786 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30187 00:30:44.786 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30244, failed to submit 66 00:30:44.786 success 30187, unsuccessful 57, failed 0 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.786 rmmod nvme_tcp 00:30:44.786 rmmod nvme_fabrics 00:30:44.786 rmmod nvme_keyring 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 107274 ']' 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 107274 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 107274 ']' 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 107274 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 107274 00:30:44.786 killing process with pid 107274 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 107274' 00:30:44.786 
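
Stripped of the xtrace noise, the abort test above is a short RPC sequence against the freshly started target plus one initiator-side run of the abort example. A condensed sketch, with arguments exactly as they appear in the log; the latencies passed to bdev_delay_create are in microseconds (so roughly one second each), which keeps I/O in flight long enough to be aborted:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB RAM-backed bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # Initiator side: queue depth 128 on one core for one second, aborting as it goes.
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

In the CTRLR result line above, 30244 aborts were submitted (30187 took effect, 57 did not) and none failed outright, so the example exits cleanly and the teardown that follows proceeds.
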
14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 107274 00:30:44.786 14:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 107274 00:30:45.047 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.047 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.047 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.047 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:45.047 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:45.047 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.047 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.308 14:35:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:30:45.308 00:30:45.308 real 0m5.065s 00:30:45.308 user 0m10.321s 00:30:45.308 sys 0m1.381s 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:45.308 ************************************ 00:30:45.308 END TEST nvmf_abort 00:30:45.308 ************************************ 00:30:45.308 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.571 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:45.572 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:45.572 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:45.572 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:45.572 ************************************ 00:30:45.572 START TEST nvmf_ns_hotplug_stress 00:30:45.572 ************************************ 00:30:45.572 14:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:45.572 * Looking for test storage... 00:30:45.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.572 14:35:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.572 --rc genhtml_branch_coverage=1 00:30:45.572 --rc genhtml_function_coverage=1 00:30:45.572 --rc genhtml_legend=1 00:30:45.572 --rc geninfo_all_blocks=1 00:30:45.572 --rc geninfo_unexecuted_blocks=1 00:30:45.572 00:30:45.572 ' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.572 --rc genhtml_branch_coverage=1 00:30:45.572 --rc genhtml_function_coverage=1 00:30:45.572 --rc genhtml_legend=1 00:30:45.572 --rc geninfo_all_blocks=1 00:30:45.572 --rc geninfo_unexecuted_blocks=1 00:30:45.572 00:30:45.572 
' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.572 --rc genhtml_branch_coverage=1 00:30:45.572 --rc genhtml_function_coverage=1 00:30:45.572 --rc genhtml_legend=1 00:30:45.572 --rc geninfo_all_blocks=1 00:30:45.572 --rc geninfo_unexecuted_blocks=1 00:30:45.572 00:30:45.572 ' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.572 --rc genhtml_branch_coverage=1 00:30:45.572 --rc genhtml_function_coverage=1 00:30:45.572 --rc genhtml_legend=1 00:30:45.572 --rc geninfo_all_blocks=1 00:30:45.572 --rc geninfo_unexecuted_blocks=1 00:30:45.572 00:30:45.572 ' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.572 14:35:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.572 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.573 14:35:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:45.573 Cannot find device "nvmf_init_br" 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:30:45.573 Cannot find device "nvmf_init_br2" 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:45.573 Cannot find device "nvmf_tgt_br" 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:45.573 Cannot find device "nvmf_tgt_br2" 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:30:45.573 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:45.834 Cannot find device "nvmf_init_br" 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:45.834 Cannot find device "nvmf_init_br2" 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:45.834 Cannot find device "nvmf_tgt_br" 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:45.834 Cannot find device "nvmf_tgt_br2" 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:45.834 Cannot find device "nvmf_br" 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:45.834 Cannot find device "nvmf_init_if" 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:45.834 Cannot find device "nvmf_init_if2" 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:45.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:45.834 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:45.834 14:35:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:45.834 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:46.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:46.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:30:46.096 00:30:46.096 --- 10.0.0.3 ping statistics --- 00:30:46.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.096 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:46.096 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:46.096 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:30:46.096 00:30:46.096 --- 10.0.0.4 ping statistics --- 00:30:46.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.096 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:46.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:46.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:30:46.096 00:30:46.096 --- 10.0.0.1 ping statistics --- 00:30:46.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.096 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:46.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:30:46.096 00:30:46.096 --- 10.0.0.2 ping statistics --- 00:30:46.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.096 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=107596 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 107596 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 107596 ']' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
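With all four pings answering, connectivity is verified in both directions, and nvmfappstart launches the target inside the namespace; note NVMF_APP being prefixed with the netns exec command at common.sh@227. waitforlisten then polls the RPC socket until the application responds. A simplified sketch of that start-and-wait pattern, taking the binary path, flags, and retry bound (max_retries=100) from the log:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc.py fails until the target accepts connections on /var/tmp/spdk.sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done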
00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:46.096 14:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:46.096 [2024-11-06 14:35:13.695160] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:46.096 [2024-11-06 14:35:13.697435] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:46.096 [2024-11-06 14:35:13.697536] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.357 [2024-11-06 14:35:13.860005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:46.357 [2024-11-06 14:35:13.960133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.357 [2024-11-06 14:35:13.960176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.357 [2024-11-06 14:35:13.960186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.357 [2024-11-06 14:35:13.960196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.357 [2024-11-06 14:35:13.960203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.357 [2024-11-06 14:35:13.961607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.357 [2024-11-06 14:35:13.961890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.357 [2024-11-06 14:35:13.961898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.618 [2024-11-06 14:35:14.185000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:46.618 [2024-11-06 14:35:14.185284] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:46.618 [2024-11-06 14:35:14.185981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:46.618 [2024-11-06 14:35:14.185994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
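The startup notices follow directly from the flags: -m 0xE is the core mask 0b1110, so three reactors come up on cores 1, 2 and 3 (hence "Total cores available: 3"), and --interrupt-mode switches the app thread and the nvmf poll-group threads from busy polling to event-driven operation. Decoding such a mask is a one-liner:

  # list the cores selected by -m 0xE
  for c in {0..7}; do (( (0xE >> c) & 1 )) && echo "core $c"; done   # core 1, core 2, core 3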
00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:47.191 [2024-11-06 14:35:14.775348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.191 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:47.452 14:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:47.714 [2024-11-06 14:35:15.204158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:47.714 14:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:47.975 14:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:48.235 Malloc0 00:30:48.236 14:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:48.236 Delay0 00:30:48.496 14:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.496 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:48.756 NULL1 00:30:48.756 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:49.017 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:49.017 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=107722 00:30:49.017 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:49.017 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.278 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.540 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:49.540 14:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:49.540 true 00:30:49.540 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:49.540 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.801 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.062 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:50.062 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:50.323 true 00:30:50.323 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:50.323 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.586 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.845 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:50.845 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:50.845 true 00:30:50.845 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:50.845 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.103 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
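The stack under test is now in place: Malloc0 wrapped by the artificially slow Delay0, plus the resizable NULL1, both exported through nqn.2016-06.io.spdk:cnode1 while spdk_nvme_perf drives 30 seconds of 512-byte random reads at queue depth 128. The iterations that follow are the stress loop itself: as long as the perf process (PERF_PID=107722) is alive, namespace 1 is detached and re-attached and NULL1 grows by one megabyte per pass. A condensed sketch of the loop, with paths and names from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # run until perf exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"         # grow NULL1 to ${null_size} MB
  done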
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.364 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:51.364 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:51.621 true 00:30:51.621 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:51.621 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.879 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.136 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:52.137 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:52.137 true 00:30:52.137 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:52.137 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.394 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.652 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:52.652 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:52.910 true 00:30:52.910 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:52.910 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.168 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.168 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:53.168 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:53.426 true 00:30:53.426 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:53.426 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.684 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.943 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:53.943 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:54.200 true 00:30:54.200 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:54.200 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.200 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.458 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:54.458 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:54.716 true 00:30:54.716 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:54.716 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.974 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.232 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:55.232 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:55.232 true 00:30:55.491 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:55.491 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.491 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.750 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:55.750 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:56.009 true 00:30:56.009 14:35:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:56.009 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.267 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.525 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:56.525 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:56.525 true 00:30:56.525 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:56.525 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.784 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.042 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:57.042 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:57.301 true 00:30:57.301 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:57.301 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.559 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.817 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:57.817 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:57.817 true 00:30:57.817 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:57.817 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.075 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.334 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:58.334 
14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:58.592 true 00:30:58.592 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:58.592 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.851 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.851 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:58.851 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:59.109 true 00:30:59.109 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:59.109 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.367 14:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.625 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:59.625 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:59.883 true 00:30:59.883 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:30:59.883 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.883 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.142 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:00.142 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:00.401 true 00:31:00.401 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:00.401 14:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.659 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.918 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:00.918 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:01.176 true 00:31:01.176 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:01.176 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.176 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.435 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:01.435 14:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:01.693 true 00:31:01.693 14:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:01.693 14:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.952 14:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.217 14:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:02.217 14:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:02.217 true 00:31:02.217 14:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:02.217 14:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.477 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.739 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:02.739 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:03.011 true 00:31:03.011 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:03.011 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.268 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.269 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:03.269 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:03.527 true 00:31:03.527 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:03.527 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.786 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.044 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:04.044 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:04.303 true 00:31:04.303 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:04.303 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.303 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.561 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:04.561 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:04.820 true 00:31:04.820 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:04.820 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.079 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.337 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:05.337 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:05.337 true 00:31:05.338 14:35:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:05.338 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.596 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.854 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:05.854 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:06.113 true 00:31:06.113 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:06.113 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.371 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.371 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:06.371 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:06.629 true 00:31:06.629 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:06.629 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.888 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.146 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:07.146 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:07.404 true 00:31:07.405 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:07.405 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.663 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.663 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:07.663 
14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:07.922 true 00:31:07.922 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:07.922 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.180 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.439 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:08.439 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:08.439 true 00:31:08.697 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:08.697 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.697 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.955 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:08.955 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:09.214 true 00:31:09.214 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:09.214 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.472 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.731 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:09.731 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:09.731 true 00:31:09.731 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:09.731 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.989 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.247 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:10.247 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:10.505 true 00:31:10.505 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:10.505 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.764 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.764 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:10.764 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:11.022 true 00:31:11.022 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:11.022 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.281 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.539 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:11.540 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:11.798 true 00:31:11.798 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:11.798 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.057 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.057 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:12.057 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:12.315 true 00:31:12.315 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:12.315 14:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.573 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.831 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:12.831 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:13.089 true 00:31:13.089 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:13.089 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.089 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.347 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:13.347 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:13.605 true 00:31:13.605 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:13.605 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.862 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.155 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:14.155 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:14.155 true 00:31:14.155 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:14.155 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.414 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.672 14:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:14.672 14:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:14.931 true 00:31:14.931 14:35:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:14.931 14:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.189 14:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.189 14:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:15.189 14:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:15.447 true 00:31:15.447 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:15.447 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.705 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.963 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:15.963 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:16.221 true 00:31:16.221 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:16.221 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.479 14:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.479 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:16.479 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:16.738 true 00:31:16.738 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:16.738 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.996 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.255 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:17.255 
14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:17.513 true 00:31:17.513 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:17.513 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.513 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.772 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:17.772 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:18.030 true 00:31:18.030 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:18.030 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.288 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.546 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:18.546 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:18.546 true 00:31:18.804 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:18.804 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.804 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.063 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:19.063 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:19.322 Initializing NVMe Controllers 00:31:19.322 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.322 Controller IO queue size 128, less than required. 00:31:19.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:19.322 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:19.322 Initialization complete. Launching workers. 
00:31:19.322 ======================================================== 00:31:19.322 Latency(us) 00:31:19.322 Device Information : IOPS MiB/s Average min max 00:31:19.322 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 25004.90 12.21 5119.16 2875.45 45252.53 00:31:19.322 ======================================================== 00:31:19.322 Total : 25004.90 12.21 5119.16 2875.45 45252.53 00:31:19.322 00:31:19.322 true 00:31:19.322 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 107722 00:31:19.322 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (107722) - No such process 00:31:19.322 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 107722 00:31:19.322 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.582 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:19.841 null0 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:19.841 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:20.100 null1 00:31:20.100 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.100 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.100 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:20.358 null2 00:31:20.358 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.358 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.358 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:20.358 
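The summary above is spdk_nvme_perf reporting on the completed run: about 25k IOPS of 512-byte reads were sustained against NSID 2 while namespace 1 was being hotplugged underneath, and the queue-size notice is only a warning that requests beyond 128 queue inside the NVMe driver. Once kill -0 confirms the perf process is gone, both namespaces are removed and phase two begins: eight null bdevs (null0 through null7, each 100 MB with a 4096-byte block size) are created for eight parallel add/remove workers, as in this condensed form of the creation loop traced at ns_hotplug_stress.sh@59-60:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 8; i++)); do
      $rpc bdev_null_create "null$i" 100 4096   # 100 MB, 4 KiB blocks
  done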
null3 00:31:20.358 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.358 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.358 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:20.617 null4 00:31:20.617 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.617 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.617 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:20.878 null5 00:31:20.878 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.878 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.878 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:20.878 null6 00:31:20.878 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.878 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.878 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:21.138 null7 00:31:21.138 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:21.138 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:21.138 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:21.138 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
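The loop traced at ns_hotplug_stress.sh@44-@53 above is the resize-under-I/O phase: while the perf process (PID 107722) stays alive, namespace 1 is hot-removed and re-added and the NULL1 bdev is grown by one size unit per pass; once kill -0 reports "No such process" the loop ends and the PID is reaped. A minimal sketch reconstructed from the xtrace, not the verbatim script; $rpc, $perf_pid, and the starting null_size are stand-ins inferred from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1024                                                     # assumed start; the excerpt joins mid-loop at 1045
    while kill -0 "$perf_pid"; do                                      # sh@44: run until perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove ns 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back
        null_size=$((null_size + 1))                                   # sh@49: bump the target size
        $rpc bdev_null_resize NULL1 $null_size                         # sh@50: resize under live I/O
    done
    wait "$perf_pid"                                                   # sh@53: reap the perf process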
00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 109135 109137 109138 109140 109143 109144 109145 109148 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.139 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:21.398 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:21.398 14:35:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:21.398 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:21.398 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:21.398 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:21.398 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:21.398 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:21.398 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.398 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.398 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.398 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
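From ns_hotplug_stress.sh@58 onward the trace is the concurrent hot-plug phase: eight null bdevs (null0..null7, 100 MB each with 4096-byte blocks) are created, then eight add_remove workers are forked, and sh@66 waits on the eight worker PIDs (109135 through 109148). Each worker pairs namespace ID i+1 with bdev null$i and cycles it through ten add/remove passes; the interleaved @14-@18 lines above are those workers racing each other. Reconstructed as a sketch from the xtrace, with $rpc again standing in for the rpc.py path; the real script is the authority:

    add_remove() {                                                               # sh@14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                           # sh@16: ten passes per worker
            $rpc nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev # sh@17: attach the bdev as ns
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid       # sh@18: detach it again
        done
    }
    nthreads=8
    pids=()                                                                      # sh@58
    for ((i = 0; i < nthreads; i++)); do                                         # sh@59-@60: create null0..null7
        $rpc bdev_null_create null$i 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                                         # sh@62-@64: fork the workers
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done
    wait "${pids[@]}"                                                            # sh@66: reap all eight workers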
00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:21.656 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.915 14:35:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.915 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:22.181 14:35:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:22.181 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.443 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:22.443 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.702 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:22.961 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.220 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.479 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.479 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.479 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.479 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.479 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:23.479 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.479 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.738 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.997 14:35:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.997 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.256 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.515 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.515 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.515 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:24.515 14:35:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.515 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.774 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.033 14:35:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.033 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.311 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:25.609 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.609 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.609 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:25.609 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.609 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.609 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.610 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.610 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.610 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.610 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.610 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.871 
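Editor's note: the interleaved add/remove records above all come from the stress loop in target/ns_hotplug_stress.sh (the @16-18 markers in the trace). A minimal sketch of one such loop, reconstructed from the trace: the rpc.py path, RPC names, and the nsid-N-to-nullN-1 pairing are verbatim from the log, but the random add/remove choice is an assumption — the real script's interleaving (several loops running concurrently, judging by the log) is not shown in this slice.

#!/usr/bin/env bash
# Sketch of the hotplug stress loop traced above (ns_hotplug_stress.sh@16-18).
# Assumes nqn.2016-06.io.spdk:cnode1 already exists with null bdevs null0..null7.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do
    n=$(( RANDOM % 8 + 1 ))                       # pick nsid 1..8
    if (( RANDOM % 2 )); then
        # attach bdev null$((n-1)) as namespace $n; may fail if already attached
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
    else
        # detach namespace $n; may fail if already removed
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" || true
    fi
done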
14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.871 rmmod nvme_tcp 00:31:25.871 rmmod nvme_fabrics 00:31:25.871 rmmod nvme_keyring 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 107596 ']' 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 107596 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 107596 ']' 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 107596 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 107596 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:25.871 killing process with pid 107596 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 107596' 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 107596 00:31:25.871 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 107596 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:26.815 14:35:54 
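Editor's note: nvmftestfini above tears the run down in two stages — unload the kernel initiator modules and kill the target (this stretch of the log), then restore iptables and delete the veth/bridge topology (the records that follow). A condensed sketch, with command, interface, and rule names copied from the trace; the ordering is compressed relative to nvmf/common.sh@233-246.

# Sketch of the teardown traced above/below (nvmf/common.sh).
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # also pulls out nvme_fabrics/nvme_keyring
done
modprobe -v -r nvme-fabrics
set -e

kill "$nvmfpid"                        # killprocess: pid 107596 in this run
wait "$nvmfpid" 2> /dev/null           # works because nvmf_tgt is our child

# Drop only the SPDK-tagged iptables rules, keeping everything else intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Unwind bridge membership, then delete links and the namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk       # remove_spdk_ns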
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:26.815 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:31:26.816 00:31:26.816 real 0m41.383s 00:31:26.816 user 3m4.563s 00:31:26.816 sys 0m14.535s 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:26.816 ************************************ 00:31:26.816 END TEST nvmf_ns_hotplug_stress 00:31:26.816 ************************************ 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:26.816 ************************************ 00:31:26.816 START TEST nvmf_delete_subsystem 00:31:26.816 ************************************ 00:31:26.816 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:27.078 * Looking for test storage... 00:31:27.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.078 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.078 --rc genhtml_branch_coverage=1 00:31:27.078 --rc genhtml_function_coverage=1 00:31:27.078 --rc genhtml_legend=1 00:31:27.078 --rc geninfo_all_blocks=1 00:31:27.078 --rc geninfo_unexecuted_blocks=1 00:31:27.078 00:31:27.078 ' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.079 --rc genhtml_branch_coverage=1 00:31:27.079 --rc genhtml_function_coverage=1 00:31:27.079 --rc genhtml_legend=1 00:31:27.079 --rc geninfo_all_blocks=1 00:31:27.079 --rc geninfo_unexecuted_blocks=1 00:31:27.079 00:31:27.079 ' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.079 --rc genhtml_branch_coverage=1 00:31:27.079 --rc genhtml_function_coverage=1 00:31:27.079 --rc genhtml_legend=1 00:31:27.079 --rc geninfo_all_blocks=1 00:31:27.079 --rc geninfo_unexecuted_blocks=1 00:31:27.079 00:31:27.079 ' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.079 --rc genhtml_branch_coverage=1 00:31:27.079 --rc genhtml_function_coverage=1 00:31:27.079 --rc 
genhtml_legend=1 00:31:27.079 --rc geninfo_all_blocks=1 00:31:27.079 --rc geninfo_unexecuted_blocks=1 00:31:27.079 00:31:27.079 ' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.079 14:35:54 
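Editor's note: two things are visible in the records above. First, paths/export.sh prepends the Go/protoc/golangci toolchain directories on every source, which is why PATH accumulates the same entries repeatedly. Second, a few records further up, scripts/common.sh's cmp_versions decides whether the installed lcov is new enough for the branch-coverage flags (the `lt 1.15 2` call). A standalone sketch of that dotted-version comparison, reconstructed from the @333-368 trace lines; the real helper also normalizes components via its decimal() helper, which is elided here.

# Sketch of cmp_versions as traced above: succeeds when "$1 $2 $3" holds,
# comparing dotted version components numerically, missing components as 0.
cmp_versions() {
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local op=$2 ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '>=' || $op == '<=' ]]   # all components equal: inclusive ops only
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # the lt 1.15 2 call above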
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.079 14:35:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.079 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:27.080 Cannot find device "nvmf_init_br" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:27.080 Cannot find device "nvmf_init_br2" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:27.080 Cannot find device "nvmf_tgt_br" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:27.080 Cannot find device "nvmf_tgt_br2" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:27.080 Cannot find device "nvmf_init_br" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:27.080 Cannot find device "nvmf_init_br2" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:27.080 Cannot find device "nvmf_tgt_br" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:27.080 Cannot find device "nvmf_tgt_br2" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:27.080 Cannot find device "nvmf_br" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:27.080 Cannot find device "nvmf_init_if" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:27.080 Cannot find device "nvmf_init_if2" 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:27.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:31:27.080 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:27.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:27.342 14:35:54 
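Editor's note: the "Cannot find device"/"Cannot open network namespace" lines above are expected — nvmf_veth_init first deletes any leftover topology, tolerating absence via `true`, before rebuilding it. The rebuild, traced above and continuing in the records below, creates a namespace, two initiator-side and two target-side veth pairs, and one bridge tying the host-side ends together. A condensed sketch with interface names and 10.0.0.x/24 addresses verbatim from the log:

# Sketch of nvmf_veth_init (nvmf/common.sh@177-214) as traced here.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if ends carry addresses, *_br ends get enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if       # first initiator address
ip addr add 10.0.0.2/24 dev nvmf_init_if2      # second initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge joins the four host-side veth ends.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done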
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:27.342 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:27.343 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:27.343 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:31:27.343 00:31:27.343 --- 10.0.0.3 ping statistics --- 00:31:27.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.343 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:27.343 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:27.343 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:31:27.343 00:31:27.343 --- 10.0.0.4 ping statistics --- 00:31:27.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.343 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:27.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:27.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:31:27.343 00:31:27.343 --- 10.0.0.1 ping statistics --- 00:31:27.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.343 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:27.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:27.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:31:27.343 00:31:27.343 --- 10.0.0.2 ping statistics --- 00:31:27.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.343 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=110522 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 110522 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 110522 ']' 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
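Editor's note: the four pings above (nvmf/common.sh@222-225) verify the bridge path in both directions before the target starts — host to both namespace-side addresses, then from inside the namespace back to both initiator addresses. Equivalently:

# Sketch of the connectivity check traced above.
ping -c 1 10.0.0.3                                   # host -> first target addr
ping -c 1 10.0.0.4                                   # host -> second target addr
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> first initiator
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2    # namespace -> second initiator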
00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:27.343 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:27.602 [2024-11-06 14:35:55.025857] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:27.602 [2024-11-06 14:35:55.028079] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:27.603 [2024-11-06 14:35:55.028194] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.603 [2024-11-06 14:35:55.190920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:27.862 [2024-11-06 14:35:55.280976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.862 [2024-11-06 14:35:55.281023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.862 [2024-11-06 14:35:55.281035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.862 [2024-11-06 14:35:55.281042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.862 [2024-11-06 14:35:55.281048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.862 [2024-11-06 14:35:55.282348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.862 [2024-11-06 14:35:55.282426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.862 [2024-11-06 14:35:55.457172] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:27.862 [2024-11-06 14:35:55.457740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:27.862 [2024-11-06 14:35:55.457879] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
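Editor's note: nvmfappstart above launches the target inside the namespace with -m 0x3 (reactors on cores 0 and 1, confirmed by the reactor_run notices) and --interrupt-mode (confirmed by the spdk_thread intr-mode notices), then waits for the RPC socket. A sketch with the command line verbatim from the trace; the polling loop is a simplified stand-in for the autotest waitforlisten helper, not its actual implementation.

# Sketch of nvmfappstart as traced above (nvmf/common.sh@508-510).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Simplified waitforlisten: poll the RPC socket until the app answers.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for (( i = 0; i < 100; i++ )); do
    "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.1
done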
00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 [2024-11-06 14:35:55.915297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 [2024-11-06 14:35:55.943764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 NULL1 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.430 14:35:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 Delay0 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=110573 00:31:28.430 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:28.431 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:28.689 [2024-11-06 14:35:56.165541] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
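The setup that the rpc_cmd calls above perform can be replayed by hand. A sketch, assuming rpc_cmd in the harness wraps scripts/rpc.py from the same checkout (the RPC names, arguments, and the perf invocation are exactly those in the log; the delete-and-poll tail is what delete_subsystem.sh does next):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # -a allows any host; -s sets the serial; -m caps namespaces at 10
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # 1000 MB null bdev with 512-byte blocks, wrapped in a delay bdev
    # (all four latency knobs at 1000000 us) so I/O is still queued
    # when the subsystem disappears
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive 70/30 random R/W at queue depth 128 for 5 s in the background
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # the test then deletes the subsystem mid-I/O and polls until perf
    # exits, mirroring lines 32-38 of delete_subsystem.sh seen below
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && break
        sleep 0.5
    done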
00:31:30.590 14:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:30.590 14:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:30.590 14:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... long runs of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries condensed; the qpair state errors interleaved among them were: ...]
00:31:30.590 [2024-11-06 14:35:58.196941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set
00:31:30.591 [2024-11-06 14:35:58.198405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010480 is same with the state(6) to be set
00:31:30.591 [2024-11-06 14:35:58.198873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000ff80 is same with the state(6) to be set
00:31:31.967 [2024-11-06 14:35:59.179863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f300 is same with the state(6) to be set
00:31:31.967 [2024-11-06 14:35:59.198837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fa80 is same with the state(6) to be set
00:31:31.967 [2024-11-06 14:35:59.200133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010700 is same with the state(6) to be set
00:31:31.967 [2024-11-06 14:35:59.200825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
00:31:31.968 [2024-11-06 14:35:59.201109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010200 is same with the state(6) to be set
00:31:31.968 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:31.968 Initializing NVMe Controllers
00:31:31.968 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:31:31.968 Controller IO queue size 128, less than required.
00:31:31.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:31.968 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:31.968 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:31.968 Initialization complete. Launching workers.
00:31:31.968 ========================================================
00:31:31.968 Latency(us)
00:31:31.968 Device Information : IOPS MiB/s Average min max
00:31:31.968 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.16 0.09 880332.97 957.63 1009890.53
00:31:31.968 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.25 0.08 922718.08 493.21 1009305.20
00:31:31.968 ========================================================
00:31:31.968 Total : 335.42 0.16 900330.69 493.21 1009890.53
00:31:31.968
00:31:31.968 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:31.968 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 110573
00:31:31.968 [2024-11-06 14:35:59.204789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500000f300 (9): Bad file descriptor
00:31:31.968 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:32.228 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:32.228 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 110573
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (110573) - No such process
00:31:32.228 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 110573
00:31:32.228 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 110573
00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:32.229 14:35:59
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 110573 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 [2024-11-06 14:35:59.727671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=110616 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:32.229 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 
-w randrw -M 70 -o 512 -P 4 00:31:32.487 [2024-11-06 14:35:59.919117] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:32.746 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:32.746 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:32.746 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:33.311 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:33.311 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:33.311 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:33.877 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:33.877 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:33.877 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:34.135 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:34.135 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:34.135 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:34.702 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:34.702 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:34.702 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:35.268 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:35.268 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:35.268 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:35.526 Initializing NVMe Controllers 00:31:35.526 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:31:35.526 Controller IO queue size 128, less than required. 00:31:35.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:35.526 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:35.526 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:35.526 Initialization complete. Launching workers. 
00:31:35.526 ======================================================== 00:31:35.526 Latency(us) 00:31:35.526 Device Information : IOPS MiB/s Average min max 00:31:35.526 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002938.25 1000188.49 1008930.92 00:31:35.526 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003636.13 1000271.03 1008823.45 00:31:35.526 ======================================================== 00:31:35.526 Total : 256.00 0.12 1003287.19 1000188.49 1008930.92 00:31:35.526 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110616 00:31:35.786 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (110616) - No such process 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 110616 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.786 rmmod nvme_tcp 00:31:35.786 rmmod nvme_fabrics 00:31:35.786 rmmod nvme_keyring 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 110522 ']' 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 110522 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 110522 ']' 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 110522 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- 
# ps --no-headers -o comm= 110522 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:35.786 killing process with pid 110522 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 110522' 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 110522 00:31:35.786 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 110522 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:36.722 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:31:36.723 00:31:36.723 real 0m9.828s 00:31:36.723 user 0m25.148s 00:31:36.723 sys 0m1.929s 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:36.723 ************************************ 00:31:36.723 END TEST nvmf_delete_subsystem 00:31:36.723 ************************************ 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:36.723 ************************************ 00:31:36.723 START TEST nvmf_host_management 00:31:36.723 ************************************ 00:31:36.723 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:36.982 * Looking for test storage... 
00:31:36.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.982 --rc genhtml_branch_coverage=1 00:31:36.982 --rc genhtml_function_coverage=1 00:31:36.982 --rc genhtml_legend=1 00:31:36.982 --rc geninfo_all_blocks=1 00:31:36.982 --rc geninfo_unexecuted_blocks=1 00:31:36.982 00:31:36.982 ' 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.982 --rc genhtml_branch_coverage=1 00:31:36.982 --rc genhtml_function_coverage=1 00:31:36.982 --rc genhtml_legend=1 00:31:36.982 --rc geninfo_all_blocks=1 00:31:36.982 --rc geninfo_unexecuted_blocks=1 00:31:36.982 00:31:36.982 ' 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.982 --rc genhtml_branch_coverage=1 00:31:36.982 --rc genhtml_function_coverage=1 00:31:36.982 --rc genhtml_legend=1 00:31:36.982 --rc geninfo_all_blocks=1 00:31:36.982 --rc geninfo_unexecuted_blocks=1 00:31:36.982 00:31:36.982 ' 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.982 --rc genhtml_branch_coverage=1 00:31:36.982 --rc genhtml_function_coverage=1 00:31:36.982 --rc genhtml_legend=1 
00:31:36.982 --rc geninfo_all_blocks=1 00:31:36.982 --rc geninfo_unexecuted_blocks=1 00:31:36.982 00:31:36.982 ' 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.982 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.983 14:36:04
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:36.983 14:36:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:36.983 Cannot find device "nvmf_init_br" 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:36.983 Cannot find device "nvmf_init_br2" 00:31:36.983 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:36.984 Cannot find device "nvmf_tgt_br" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:36.984 Cannot find device "nvmf_tgt_br2" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:36.984 Cannot find device "nvmf_init_br" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:31:36.984 Cannot find device "nvmf_init_br2" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:36.984 Cannot find device "nvmf_tgt_br" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:36.984 Cannot find device "nvmf_tgt_br2" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:36.984 Cannot find device "nvmf_br" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:36.984 Cannot find device "nvmf_init_if" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:36.984 Cannot find device "nvmf_init_if2" 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:36.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:36.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:36.984 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:37.244 14:36:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:37.244 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:37.244 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:31:37.244 00:31:37.244 --- 10.0.0.3 ping statistics --- 00:31:37.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.244 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:37.244 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:37.244 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:31:37.244 00:31:37.244 --- 10.0.0.4 ping statistics --- 00:31:37.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.244 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:31:37.244 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:37.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:37.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:31:37.245 00:31:37.245 --- 10.0.0.1 ping statistics --- 00:31:37.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.245 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:37.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:37.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:31:37.245 00:31:37.245 --- 10.0.0.2 ping statistics --- 00:31:37.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.245 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=110904 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 110904 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 110904 ']' 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:37.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
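
The trace above first tears down any leftover fixture (the "Cannot find device" messages are the expected no-op branch, each followed by `true`), then rebuilds it: four veth pairs, a network namespace holding the target-side ends, a bridge joining the peer ends, iptables ACCEPT rules on port 4420 tagged with an SPDK_NVMF comment, and bidirectional pings as a sanity check. A condensed sketch of the same fixture, cut down to one interface per side (run as root; names match the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two pairs together
    ip link set nvmf_tgt_br master nvmf_br
    # Tag the firewall hole so teardown can strip exactly this rule later:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                          # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
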
00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:37.245 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.245 [2024-11-06 14:36:04.849054] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:37.245 [2024-11-06 14:36:04.850967] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:37.245 [2024-11-06 14:36:04.851051] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.503 [2024-11-06 14:36:05.010113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:37.504 [2024-11-06 14:36:05.093227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.504 [2024-11-06 14:36:05.093371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.504 [2024-11-06 14:36:05.093424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.504 [2024-11-06 14:36:05.093449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.504 [2024-11-06 14:36:05.093464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.504 [2024-11-06 14:36:05.094647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.504 [2024-11-06 14:36:05.095102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:37.504 [2024-11-06 14:36:05.095071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.504 [2024-11-06 14:36:05.095004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.762 [2024-11-06 14:36:05.268662] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:37.762 [2024-11-06 14:36:05.269804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:37.762 [2024-11-06 14:36:05.269866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:37.762 [2024-11-06 14:36:05.270517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:37.762 [2024-11-06 14:36:05.270604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
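
nvmfappstart launches the target inside the namespace with core mask 0x1E (binary 11110, hence the four reactors on cores 1-4) and --interrupt-mode, which is what flips each spdk_thread to intr mode in the notices above; waitforlisten then blocks until the RPC socket answers. A minimal sketch of the same launch-and-wait pattern, assuming an SPDK checkout with scripts/rpc.py available:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll the RPC socket until the app is up; rpc_get_methods is a cheap query.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
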
00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.339 [2024-11-06 14:36:05.772536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.339 Malloc0 00:31:38.339 [2024-11-06 14:36:05.892443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=110982 00:31:38.339 14:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 110982 /var/tmp/bdevperf.sock 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 110982 ']' 00:31:38.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.339 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:38.340 { 00:31:38.340 "params": { 00:31:38.340 "name": "Nvme$subsystem", 00:31:38.340 "trtype": "$TEST_TRANSPORT", 00:31:38.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.340 "adrfam": "ipv4", 00:31:38.340 "trsvcid": "$NVMF_PORT", 00:31:38.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.340 "hdgst": ${hdgst:-false}, 00:31:38.340 "ddgst": ${ddgst:-false} 00:31:38.340 }, 00:31:38.340 "method": "bdev_nvme_attach_controller" 00:31:38.340 } 00:31:38.340 EOF 00:31:38.340 )") 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
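
The `--json /dev/fd/63` in the bdevperf invocation above is bash process substitution: gen_nvmf_target_json expands the heredoc template once per subsystem (here just Nvme0), assembles the document with jq, and the resulting printf is traced just below. A reduced sketch of the pattern, with a stand-in generator; the exact wrapping document shape is an assumption, not a copy of the real helper:

    gen_config() {
        cat <<EOF
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    }
    # <(gen_config) becomes /dev/fd/NN; the config never touches disk.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_config) -q 64 -o 65536 -w verify -t 10
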
00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:38.340 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:38.340 "params": { 00:31:38.340 "name": "Nvme0", 00:31:38.340 "trtype": "tcp", 00:31:38.340 "traddr": "10.0.0.3", 00:31:38.340 "adrfam": "ipv4", 00:31:38.340 "trsvcid": "4420", 00:31:38.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.340 "hdgst": false, 00:31:38.340 "ddgst": false 00:31:38.340 }, 00:31:38.340 "method": "bdev_nvme_attach_controller" 00:31:38.340 }' 00:31:38.598 [2024-11-06 14:36:05.998473] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:38.598 [2024-11-06 14:36:05.998582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110982 ] 00:31:38.598 [2024-11-06 14:36:06.150218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.856 [2024-11-06 14:36:06.236719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.114 Running I/O for 10 seconds... 00:31:39.374 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:39.374 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:39.374 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:39.374 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.374 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:39.374 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=653 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 653 -ge 100 ']' 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:39.375 [2024-11-06 14:36:06.888106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:31:39.375 [2024-11-06 14:36:06.888280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:31:39.375 [2024-11-06 14:36:06.888382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:31:39.375 [2024-11-06 14:36:06.888413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:31:39.375 [2024-11-06 14:36:06.888475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.375 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:39.375 [2024-11-06 14:36:06.893504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.375 [2024-11-06 14:36:06.893712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
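
waitforio above polls bdevperf's iostat until the bdev has completed at least 100 reads (653 here), so traffic is guaranteed to be in flight when nvmf_subsystem_remove_host fires; the long abort listing that continues below is the expected fallout of the target dropping the connection mid-workload. A sketch of the polling loop, assuming scripts/rpc.py and jq on PATH:

    for ((i = 10; i > 0; i--)); do
        ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && break    # enough I/O observed; safe to yank the host
        sleep 0.25
    done
    (( i > 0 ))                      # loop exhausted => no I/O ever started => fail
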
00:31:39.375 [2024-11-06 14:36:06.893785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.375 [2024-11-06 14:36:06.893811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.893835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.375 [2024-11-06 14:36:06.893887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.893914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.375 [2024-11-06 14:36:06.893937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.893964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:31:39.375 [2024-11-06 14:36:06.894021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 
14:36:06.894145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 
14:36:06.894301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.375 [2024-11-06 14:36:06.894362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.375 [2024-11-06 14:36:06.894370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 
14:36:06.894451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 
14:36:06.894604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 
14:36:06.894749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894898] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.894936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.894942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.376 [2024-11-06 14:36:06.895343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.376 [2024-11-06 14:36:06.895437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.377 [2024-11-06 14:36:06.895467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.377 [2024-11-06 14:36:06.895540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.377 [2024-11-06 14:36:06.895569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.377 [2024-11-06 14:36:06.895594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.377 [2024-11-06 14:36:06.895619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.377 [2024-11-06 14:36:06.895665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.377 [2024-11-06 14:36:06.896849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:39.377 task offset: 99200 on job bdev=Nvme0n1 fails 00:31:39.377 00:31:39.377 Latency(us) 00:31:39.377 [2024-11-06T14:36:07.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.377 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.377 Job: Nvme0n1 ended in about 0.38 seconds with error 00:31:39.377 Verification LBA range: start 0x0 length 0x400 00:31:39.377 Nvme0n1 : 0.38 1978.70 123.67 166.85 0.00 29057.95 2697.06 28835.84 00:31:39.377 [2024-11-06T14:36:07.012Z] =================================================================================================================== 00:31:39.377 [2024-11-06T14:36:07.012Z] Total : 1978.70 123.67 166.85 0.00 29057.95 2697.06 28835.84 00:31:39.377 [2024-11-06 14:36:06.899988] 
app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:39.377 [2024-11-06 14:36:06.900083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:31:39.377 [2024-11-06 14:36:06.903181] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:39.377 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.377 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 110982 00:31:40.310 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (110982) - No such process 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.310 { 00:31:40.310 "params": { 00:31:40.310 "name": "Nvme$subsystem", 00:31:40.310 "trtype": "$TEST_TRANSPORT", 00:31:40.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.310 "adrfam": "ipv4", 00:31:40.310 "trsvcid": "$NVMF_PORT", 00:31:40.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.310 "hdgst": ${hdgst:-false}, 00:31:40.310 "ddgst": ${ddgst:-false} 00:31:40.310 }, 00:31:40.310 "method": "bdev_nvme_attach_controller" 00:31:40.310 } 00:31:40.310 EOF 00:31:40.310 )") 00:31:40.310 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:40.311 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
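
The "No such process" from kill above is harmless: the first bdevperf can already be gone by the time cleanup runs, so the script tolerates a failed kill and clears the per-core lock files before the one-second verification run whose config is assembled above and printed below. The tolerant-kill idiom, sketched:

    kill -9 "$perfpid" 2>/dev/null || true   # process may already have exited
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
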
00:31:40.311 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:40.311 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.311 "params": { 00:31:40.311 "name": "Nvme0", 00:31:40.311 "trtype": "tcp", 00:31:40.311 "traddr": "10.0.0.3", 00:31:40.311 "adrfam": "ipv4", 00:31:40.311 "trsvcid": "4420", 00:31:40.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.311 "hdgst": false, 00:31:40.311 "ddgst": false 00:31:40.311 }, 00:31:40.311 "method": "bdev_nvme_attach_controller" 00:31:40.311 }' 00:31:40.568 [2024-11-06 14:36:07.978576] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:40.568 [2024-11-06 14:36:07.978685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111027 ] 00:31:40.568 [2024-11-06 14:36:08.134603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.826 [2024-11-06 14:36:08.221682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.084 Running I/O for 1 seconds... 00:31:42.019 2240.00 IOPS, 140.00 MiB/s 00:31:42.019 Latency(us) 00:31:42.019 [2024-11-06T14:36:09.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.019 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:42.019 Verification LBA range: start 0x0 length 0x400 00:31:42.019 Nvme0n1 : 1.01 2273.75 142.11 0.00 0.00 27709.23 3982.57 28432.54 00:31:42.019 [2024-11-06T14:36:09.654Z] =================================================================================================================== 00:31:42.019 [2024-11-06T14:36:09.654Z] Total : 2273.75 142.11 0.00 0.00 27709.23 3982.57 28432.54 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:31:42.584 rmmod nvme_tcp 00:31:42.584 rmmod nvme_fabrics 00:31:42.584 rmmod nvme_keyring 00:31:42.584 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 110904 ']' 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 110904 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 110904 ']' 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 110904 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:42.585 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 110904 00:31:42.843 killing process with pid 110904 00:31:42.843 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:42.843 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:42.843 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 110904' 00:31:42.843 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 110904 00:31:42.843 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 110904 00:31:43.411 [2024-11-06 14:36:10.820602] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:43.411 14:36:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:43.411 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:43.411 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:43.411 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:43.671 00:31:43.671 real 0m6.806s 00:31:43.671 user 0m21.034s 00:31:43.671 sys 0m2.243s 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:43.671 ************************************ 00:31:43.671 END TEST nvmf_host_management 00:31:43.671 ************************************ 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 
']' 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:43.671 ************************************ 00:31:43.671 START TEST nvmf_lvol 00:31:43.671 ************************************ 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:43.671 * Looking for test storage... 00:31:43.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:43.671 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.672 --rc genhtml_branch_coverage=1 00:31:43.672 --rc genhtml_function_coverage=1 00:31:43.672 --rc genhtml_legend=1 00:31:43.672 --rc geninfo_all_blocks=1 00:31:43.672 --rc geninfo_unexecuted_blocks=1 00:31:43.672 00:31:43.672 ' 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.672 --rc genhtml_branch_coverage=1 00:31:43.672 --rc genhtml_function_coverage=1 00:31:43.672 --rc genhtml_legend=1 00:31:43.672 --rc geninfo_all_blocks=1 00:31:43.672 --rc geninfo_unexecuted_blocks=1 00:31:43.672 00:31:43.672 ' 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.672 --rc genhtml_branch_coverage=1 00:31:43.672 --rc genhtml_function_coverage=1 00:31:43.672 --rc genhtml_legend=1 00:31:43.672 --rc geninfo_all_blocks=1 00:31:43.672 --rc geninfo_unexecuted_blocks=1 00:31:43.672 00:31:43.672 ' 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.672 --rc genhtml_branch_coverage=1 00:31:43.672 --rc genhtml_function_coverage=1 00:31:43.672 --rc genhtml_legend=1 00:31:43.672 --rc geninfo_all_blocks=1 00:31:43.672 --rc geninfo_unexecuted_blocks=1 00:31:43.672 00:31:43.672 ' 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.672 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.932 14:36:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:43.932 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:43.933 Cannot find device "nvmf_init_br" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:43.933 Cannot find device "nvmf_init_br2" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:43.933 Cannot find device "nvmf_tgt_br" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:43.933 Cannot find device "nvmf_tgt_br2" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:43.933 Cannot find device "nvmf_init_br" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:43.933 Cannot find device "nvmf_init_br2" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:43.933 Cannot find 
device "nvmf_tgt_br" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:43.933 Cannot find device "nvmf_tgt_br2" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:43.933 Cannot find device "nvmf_br" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:43.933 Cannot find device "nvmf_init_if" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:43.933 Cannot find device "nvmf_init_if2" 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:43.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:43.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:43.933 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:44.193 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:44.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:44.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:31:44.194 00:31:44.194 --- 10.0.0.3 ping statistics --- 00:31:44.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.194 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:44.194 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:44.194 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:31:44.194 00:31:44.194 --- 10.0.0.4 ping statistics --- 00:31:44.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.194 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:44.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:44.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:31:44.194 00:31:44.194 --- 10.0.0.1 ping statistics --- 00:31:44.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.194 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:44.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:31:44.194 00:31:44.194 --- 10.0.0.2 ping statistics --- 00:31:44.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.194 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=111300 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 111300 00:31:44.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 111300 ']' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:44.194 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:44.194 [2024-11-06 14:36:11.794438] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.194 [2024-11-06 14:36:11.796513] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:44.194 [2024-11-06 14:36:11.796613] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.453 [2024-11-06 14:36:11.962871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:44.453 [2024-11-06 14:36:12.081663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.453 [2024-11-06 14:36:12.081708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.453 [2024-11-06 14:36:12.081717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.453 [2024-11-06 14:36:12.081726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.453 [2024-11-06 14:36:12.081734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.453 [2024-11-06 14:36:12.083181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.453 [2024-11-06 14:36:12.083778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.453 [2024-11-06 14:36:12.083797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.713 [2024-11-06 14:36:12.304405] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:44.713 [2024-11-06 14:36:12.304734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:44.713 [2024-11-06 14:36:12.305192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:44.713 [2024-11-06 14:36:12.306201] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
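
The waitforlisten trace above (and its "(( i == 0 ))" / "return 0" continuation below) is the gate between launching nvmf_tgt and issuing RPCs against it. A simplified sketch of what such a helper has to do; the polling body is an assumption rather than a copy of autotest_common.sh, and rpc_get_methods is a standard SPDK RPC used here only as a cheap liveness probe:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [[ -n $pid ]] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # target process died before listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                       # retry budget exhausted
}
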
00:31:44.973 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:44.973 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:44.973 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:44.973 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.973 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:45.233 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.234 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:45.234 [2024-11-06 14:36:12.780926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.234 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:45.493 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:45.493 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:45.752 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:45.752 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:46.011 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:46.270 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eddcae6f-2118-47c7-9638-2895ca097c0b 00:31:46.270 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u eddcae6f-2118-47c7-9638-2895ca097c0b lvol 20 00:31:46.530 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=21bdceef-b24d-431b-81c4-b19872a41b57 00:31:46.530 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:46.791 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 21bdceef-b24d-431b-81c4-b19872a41b57 00:31:46.791 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:47.051 [2024-11-06 14:36:14.532859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:47.051 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:31:47.311 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=111442
00:31:47.311 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:31:47.311 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:31:48.244 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 21bdceef-b24d-431b-81c4-b19872a41b57 MY_SNAPSHOT
00:31:48.502 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1aadb475-8e57-4a4d-9124-a9234cfeee1c
00:31:48.502 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 21bdceef-b24d-431b-81c4-b19872a41b57 30
00:31:48.759 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1aadb475-8e57-4a4d-9124-a9234cfeee1c MY_CLONE
00:31:49.018 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=460bd9d7-8331-46f0-b91e-083660ded231
00:31:49.018 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 460bd9d7-8331-46f0-b91e-083660ded231
00:31:49.276 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 111442
00:31:57.428 Initializing NVMe Controllers
00:31:57.428 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:31:57.428 Controller IO queue size 128, less than required.
00:31:57.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:57.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:31:57.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:31:57.428 Initialization complete. Launching workers.
00:31:57.428 ======================================================== 00:31:57.428 Latency(us) 00:31:57.428 Device Information : IOPS MiB/s Average min max 00:31:57.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15509.30 60.58 8254.11 265.58 83106.77 00:31:57.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14939.80 58.36 8567.62 3879.22 99852.46 00:31:57.428 ======================================================== 00:31:57.428 Total : 30449.10 118.94 8407.93 265.58 99852.46 00:31:57.428 00:31:57.428 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.686 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 21bdceef-b24d-431b-81c4-b19872a41b57 00:31:57.945 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eddcae6f-2118-47c7-9638-2895ca097c0b 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.204 rmmod nvme_tcp 00:31:58.204 rmmod nvme_fabrics 00:31:58.204 rmmod nvme_keyring 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 111300 ']' 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 111300 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 111300 ']' 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 111300 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111300 00:31:58.204 killing 
process with pid 111300 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111300' 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 111300 00:31:58.204 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 111300 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:59.141 
14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:31:59.141 00:31:59.141 real 0m15.604s 00:31:59.141 user 0m55.149s 00:31:59.141 sys 0m4.849s 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:59.141 ************************************ 00:31:59.141 END TEST nvmf_lvol 00:31:59.141 ************************************ 00:31:59.141 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.402 ************************************ 00:31:59.402 START TEST nvmf_lvs_grow 00:31:59.402 ************************************ 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:59.402 * Looking for test storage... 
00:31:59.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:59.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.402 --rc genhtml_branch_coverage=1 00:31:59.402 --rc genhtml_function_coverage=1 00:31:59.402 --rc genhtml_legend=1 00:31:59.402 --rc geninfo_all_blocks=1 00:31:59.402 --rc geninfo_unexecuted_blocks=1 00:31:59.402 00:31:59.402 ' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:59.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.402 --rc genhtml_branch_coverage=1 00:31:59.402 --rc genhtml_function_coverage=1 00:31:59.402 --rc genhtml_legend=1 00:31:59.402 --rc geninfo_all_blocks=1 00:31:59.402 --rc geninfo_unexecuted_blocks=1 00:31:59.402 00:31:59.402 ' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:59.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.402 --rc genhtml_branch_coverage=1 00:31:59.402 --rc genhtml_function_coverage=1 00:31:59.402 --rc genhtml_legend=1 00:31:59.402 --rc geninfo_all_blocks=1 00:31:59.402 --rc geninfo_unexecuted_blocks=1 00:31:59.402 00:31:59.402 ' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:59.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.402 --rc genhtml_branch_coverage=1 00:31:59.402 --rc genhtml_function_coverage=1 00:31:59.402 --rc genhtml_legend=1 00:31:59.402 --rc geninfo_all_blocks=1 00:31:59.402 --rc geninfo_unexecuted_blocks=1 00:31:59.402 00:31:59.402 ' 00:31:59.402 14:36:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
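Two things worth noting in the block above. The triple-length PATH appears to be paths/export.sh being sourced once per nested common.sh include; each pass prepends the same go/protoc/golangci directories again, which is redundant but harmless. Right after it, build_nvmf_app_args assembles the target's command line; a rough reconstruction of the flags added in this run (the array's initial contents are set elsewhere in nvmf/common.sh, so treat this as illustrative):

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id 0, enable all tracepoint groups
  NVMF_APP+=("${NO_HUGE[@]}")                  # empty in this run, so hugepages stay on
  NVMF_APP+=(--interrupt-mode)                 # the '[' 1 -eq 1 ']' branch just below: this is the interrupt-mode suite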
00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.402 14:36:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:59.402 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:59.403 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.403 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:59.403 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:59.403 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:59.403 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:59.403 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:59.403 Cannot find device "nvmf_init_br" 00:31:59.403 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:31:59.403 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:59.403 Cannot find device "nvmf_init_br2" 00:31:59.403 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:31:59.403 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:59.403 Cannot find device "nvmf_tgt_br" 00:31:59.403 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:31:59.403 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:59.661 Cannot find device "nvmf_tgt_br2" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:59.661 Cannot find device "nvmf_init_br" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:59.661 Cannot find device "nvmf_init_br2" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:59.661 Cannot find device "nvmf_tgt_br" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:59.661 Cannot find device "nvmf_tgt_br2" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:59.661 Cannot find device "nvmf_br" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:59.661 Cannot find device "nvmf_init_if" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:59.661 Cannot find device "nvmf_init_if2" 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:59.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:59.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:59.661 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:59.662 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:59.923 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:59.923 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:59.923 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:31:59.924 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:59.924 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:31:59.924 00:31:59.924 --- 10.0.0.3 ping statistics --- 00:31:59.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.924 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:59.924 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:59.924 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:31:59.924 00:31:59.924 --- 10.0.0.4 ping statistics --- 00:31:59.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.924 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:59.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:31:59.924 00:31:59.924 --- 10.0.0.1 ping statistics --- 00:31:59.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.924 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:59.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:31:59.924 00:31:59.924 --- 10.0.0.2 ping statistics --- 00:31:59.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.924 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=111860 00:31:59.924 14:36:27 
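The four pings close out nvmf_veth_init: the host pings 10.0.0.3/10.0.0.4 (the target-side addresses inside nvmf_tgt_ns_spdk), and the namespace pings 10.0.0.1/10.0.0.2 (the initiator side) back across the bridge, before any NVMe/TCP traffic starts. Condensed, the topology built above looks like this (second veth pair and the link-up steps omitted; names and addresses from the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # only the *_br ends join the bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # host -> namespace check

Once that passes, NVMF_APP is prefixed with the "ip netns exec nvmf_tgt_ns_spdk" command so the target runs inside the namespace.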
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 111860 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 111860 ']' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:59.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:59.924 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.924 [2024-11-06 14:36:27.423216] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.924 [2024-11-06 14:36:27.425232] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:59.924 [2024-11-06 14:36:27.425436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.185 [2024-11-06 14:36:27.585506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.185 [2024-11-06 14:36:27.694081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.185 [2024-11-06 14:36:27.694129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.185 [2024-11-06 14:36:27.694147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.185 [2024-11-06 14:36:27.694156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.185 [2024-11-06 14:36:27.694164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.185 [2024-11-06 14:36:27.695031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.447 [2024-11-06 14:36:27.938259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.447 [2024-11-06 14:36:27.938634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
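The notices above are the point of this suite: with --interrupt-mode, the reactor and each spdk_thread wait on file descriptors (epoll) instead of busy-polling, so the target sleeps between I/Os ("Set SPDK running in interrupt mode", then each thread switching to intr mode). How the target is launched here, with flags taken from the log; the wait loop is a simplified stand-in for autotest_common.sh's waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # Poll until the app answers on its RPC socket (simplified):
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done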
00:32:00.708 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:00.708 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:00.708 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.708 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:00.708 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.708 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.708 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.968 [2024-11-06 14:36:28.500058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.969 ************************************ 00:32:00.969 START TEST lvs_grow_clean 00:32:00.969 ************************************ 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:00.969 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:01.229 14:36:28 
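With the transport registered (nvmf_create_transport -t tcp -o -u 8192), lvs_grow builds its device under test: a 200M file exposed through an AIO bdev, with a logical-volume store on top. The next entries create that store with 4 MiB clusters, which is what makes the grow step measurable; the shape of the whole scenario, with paths shortened and the store's UUID elided:

  truncate -s 200M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs    # yields 49 data clusters (the rest holds metadata)
  # ... later, while bdevperf I/O is in flight:
  truncate -s 400M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev                      # block count 51200 -> 102400
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>          # 49 -> 99 data clusters

The test then asserts total_data_clusters moved from 49 to 99 without disturbing the running workload.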
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:01.229 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:01.490 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f5da63b3-e90a-4205-be57-a44639827627 00:32:01.490 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:01.490 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:01.752 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:01.752 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:01.752 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f5da63b3-e90a-4205-be57-a44639827627 lvol 150 00:32:02.012 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c0741ed-ee70-41fe-83f5-f53458ab3185 00:32:02.012 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:02.012 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:02.272 [2024-11-06 14:36:29.655796] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:02.272 [2024-11-06 14:36:29.655984] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:02.272 true 00:32:02.272 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:02.272 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:02.272 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:02.272 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:02.531 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c0741ed-ee70-41fe-83f5-f53458ab3185 00:32:02.793 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:03.054 [2024-11-06 14:36:30.512387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:03.054 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=112015 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 112015 /var/tmp/bdevperf.sock 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 112015 ']' 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:03.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:03.315 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:03.315 [2024-11-06 14:36:30.808481] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
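bdevperf is the initiator half: a second SPDK app on core mask 0x2 with its own RPC socket, started with -z so it idles until configured over RPC. The -q 128 -o 4096 -w randwrite -t 10 flags set queue depth, I/O size, workload and duration, and -S 1 prints per-second stats (the Job: lines below). The driving sequence, with the attach and run commands as they appear a few entries further on:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0             # creates bdev Nvme0n1 over NVMe/TCP
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests           # kicks off the 10 s randwrite run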
00:32:03.315 [2024-11-06 14:36:30.808619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112015 ] 00:32:03.575 [2024-11-06 14:36:30.974279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.575 [2024-11-06 14:36:31.091905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.148 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:04.148 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:04.148 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:04.409 Nvme0n1 00:32:04.409 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:04.670 [ 00:32:04.670 { 00:32:04.670 "aliases": [ 00:32:04.670 "0c0741ed-ee70-41fe-83f5-f53458ab3185" 00:32:04.670 ], 00:32:04.670 "assigned_rate_limits": { 00:32:04.670 "r_mbytes_per_sec": 0, 00:32:04.670 "rw_ios_per_sec": 0, 00:32:04.670 "rw_mbytes_per_sec": 0, 00:32:04.670 "w_mbytes_per_sec": 0 00:32:04.670 }, 00:32:04.670 "block_size": 4096, 00:32:04.670 "claimed": false, 00:32:04.670 "driver_specific": { 00:32:04.670 "mp_policy": "active_passive", 00:32:04.670 "nvme": [ 00:32:04.670 { 00:32:04.670 "ctrlr_data": { 00:32:04.670 "ana_reporting": false, 00:32:04.670 "cntlid": 1, 00:32:04.670 "firmware_revision": "25.01", 00:32:04.670 "model_number": "SPDK bdev Controller", 00:32:04.670 "multi_ctrlr": true, 00:32:04.670 "oacs": { 00:32:04.670 "firmware": 0, 00:32:04.670 "format": 0, 00:32:04.670 "ns_manage": 0, 00:32:04.670 "security": 0 00:32:04.670 }, 00:32:04.670 "serial_number": "SPDK0", 00:32:04.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.670 "vendor_id": "0x8086" 00:32:04.670 }, 00:32:04.670 "ns_data": { 00:32:04.670 "can_share": true, 00:32:04.670 "id": 1 00:32:04.670 }, 00:32:04.670 "trid": { 00:32:04.670 "adrfam": "IPv4", 00:32:04.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.670 "traddr": "10.0.0.3", 00:32:04.670 "trsvcid": "4420", 00:32:04.670 "trtype": "TCP" 00:32:04.670 }, 00:32:04.670 "vs": { 00:32:04.670 "nvme_version": "1.3" 00:32:04.670 } 00:32:04.670 } 00:32:04.670 ] 00:32:04.670 }, 00:32:04.670 "memory_domains": [ 00:32:04.670 { 00:32:04.670 "dma_device_id": "system", 00:32:04.670 "dma_device_type": 1 00:32:04.670 } 00:32:04.670 ], 00:32:04.670 "name": "Nvme0n1", 00:32:04.670 "num_blocks": 38912, 00:32:04.670 "numa_id": -1, 00:32:04.670 "product_name": "NVMe disk", 00:32:04.670 "supported_io_types": { 00:32:04.670 "abort": true, 00:32:04.670 "compare": true, 00:32:04.670 "compare_and_write": true, 00:32:04.670 "copy": true, 00:32:04.670 "flush": true, 00:32:04.670 "get_zone_info": false, 00:32:04.670 "nvme_admin": true, 00:32:04.670 "nvme_io": true, 00:32:04.670 "nvme_io_md": false, 00:32:04.670 "nvme_iov_md": false, 00:32:04.670 "read": true, 00:32:04.670 "reset": true, 00:32:04.670 "seek_data": false, 00:32:04.670 
"seek_hole": false, 00:32:04.670 "unmap": true, 00:32:04.670 "write": true, 00:32:04.670 "write_zeroes": true, 00:32:04.670 "zcopy": false, 00:32:04.670 "zone_append": false, 00:32:04.670 "zone_management": false 00:32:04.670 }, 00:32:04.670 "uuid": "0c0741ed-ee70-41fe-83f5-f53458ab3185", 00:32:04.670 "zoned": false 00:32:04.670 } 00:32:04.670 ] 00:32:04.670 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=112057 00:32:04.670 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:04.670 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:04.930 Running I/O for 10 seconds... 00:32:05.864 Latency(us) 00:32:05.864 [2024-11-06T14:36:33.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.864 Nvme0n1 : 1.00 9146.00 35.73 0.00 0.00 0.00 0.00 0.00 00:32:05.864 [2024-11-06T14:36:33.499Z] =================================================================================================================== 00:32:05.864 [2024-11-06T14:36:33.499Z] Total : 9146.00 35.73 0.00 0.00 0.00 0.00 0.00 00:32:05.864 00:32:06.799 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f5da63b3-e90a-4205-be57-a44639827627 00:32:06.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.799 Nvme0n1 : 2.00 9812.50 38.33 0.00 0.00 0.00 0.00 0.00 00:32:06.799 [2024-11-06T14:36:34.434Z] =================================================================================================================== 00:32:06.799 [2024-11-06T14:36:34.434Z] Total : 9812.50 38.33 0.00 0.00 0.00 0.00 0.00 00:32:06.799 00:32:06.799 true 00:32:07.058 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:07.058 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:07.058 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:07.058 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:07.058 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 112057 00:32:08.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.026 Nvme0n1 : 3.00 10698.33 41.79 0.00 0.00 0.00 0.00 0.00 00:32:08.026 [2024-11-06T14:36:35.661Z] =================================================================================================================== 00:32:08.026 [2024-11-06T14:36:35.661Z] Total : 10698.33 41.79 0.00 0.00 0.00 0.00 0.00 00:32:08.026 00:32:08.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.961 Nvme0n1 : 4.00 11103.75 43.37 0.00 0.00 0.00 0.00 0.00 00:32:08.961 
[2024-11-06T14:36:36.596Z] =================================================================================================================== 00:32:08.961 [2024-11-06T14:36:36.596Z] Total : 11103.75 43.37 0.00 0.00 0.00 0.00 0.00 00:32:08.961 00:32:09.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.895 Nvme0n1 : 5.00 11279.20 44.06 0.00 0.00 0.00 0.00 0.00 00:32:09.895 [2024-11-06T14:36:37.530Z] =================================================================================================================== 00:32:09.895 [2024-11-06T14:36:37.530Z] Total : 11279.20 44.06 0.00 0.00 0.00 0.00 0.00 00:32:09.896 00:32:10.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.830 Nvme0n1 : 6.00 11399.50 44.53 0.00 0.00 0.00 0.00 0.00 00:32:10.830 [2024-11-06T14:36:38.465Z] =================================================================================================================== 00:32:10.830 [2024-11-06T14:36:38.465Z] Total : 11399.50 44.53 0.00 0.00 0.00 0.00 0.00 00:32:10.830 00:32:11.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.765 Nvme0n1 : 7.00 11453.86 44.74 0.00 0.00 0.00 0.00 0.00 00:32:11.765 [2024-11-06T14:36:39.400Z] =================================================================================================================== 00:32:11.765 [2024-11-06T14:36:39.400Z] Total : 11453.86 44.74 0.00 0.00 0.00 0.00 0.00 00:32:11.765 00:32:12.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.698 Nvme0n1 : 8.00 11451.12 44.73 0.00 0.00 0.00 0.00 0.00 00:32:12.698 [2024-11-06T14:36:40.333Z] =================================================================================================================== 00:32:12.698 [2024-11-06T14:36:40.333Z] Total : 11451.12 44.73 0.00 0.00 0.00 0.00 0.00 00:32:12.698 00:32:14.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.071 Nvme0n1 : 9.00 11478.11 44.84 0.00 0.00 0.00 0.00 0.00 00:32:14.071 [2024-11-06T14:36:41.706Z] =================================================================================================================== 00:32:14.071 [2024-11-06T14:36:41.706Z] Total : 11478.11 44.84 0.00 0.00 0.00 0.00 0.00 00:32:14.071 00:32:15.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.007 Nvme0n1 : 10.00 11491.40 44.89 0.00 0.00 0.00 0.00 0.00 00:32:15.007 [2024-11-06T14:36:42.642Z] =================================================================================================================== 00:32:15.007 [2024-11-06T14:36:42.642Z] Total : 11491.40 44.89 0.00 0.00 0.00 0.00 0.00 00:32:15.007 00:32:15.007 00:32:15.007 Latency(us) 00:32:15.007 [2024-11-06T14:36:42.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.007 Nvme0n1 : 10.01 11494.56 44.90 0.00 0.00 11131.74 4637.93 29037.49 00:32:15.007 [2024-11-06T14:36:42.642Z] =================================================================================================================== 00:32:15.007 [2024-11-06T14:36:42.642Z] Total : 11494.56 44.90 0.00 0.00 11131.74 4637.93 29037.49 00:32:15.007 { 00:32:15.007 "results": [ 00:32:15.007 { 00:32:15.007 "job": "Nvme0n1", 00:32:15.007 "core_mask": "0x2", 00:32:15.007 "workload": "randwrite", 00:32:15.007 "status": "finished", 00:32:15.007 "queue_depth": 128, 00:32:15.007 
"io_size": 4096, 00:32:15.007 "runtime": 10.008383, 00:32:15.007 "iops": 11494.564106909178, 00:32:15.007 "mibps": 44.90064104261398, 00:32:15.007 "io_failed": 0, 00:32:15.007 "io_timeout": 0, 00:32:15.007 "avg_latency_us": 11131.743447490082, 00:32:15.007 "min_latency_us": 4637.932307692307, 00:32:15.007 "max_latency_us": 29037.48923076923 00:32:15.007 } 00:32:15.007 ], 00:32:15.007 "core_count": 1 00:32:15.007 } 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 112015 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 112015 ']' 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 112015 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112015 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:15.007 killing process with pid 112015 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112015' 00:32:15.007 Received shutdown signal, test time was about 10.000000 seconds 00:32:15.007 00:32:15.007 Latency(us) 00:32:15.007 [2024-11-06T14:36:42.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.007 [2024-11-06T14:36:42.642Z] =================================================================================================================== 00:32:15.007 [2024-11-06T14:36:42.642Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 112015 00:32:15.007 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 112015 00:32:15.577 14:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:15.577 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:15.840 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:15.840 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:16.101 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:32:16.101 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:16.101 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:16.363 [2024-11-06 14:36:43.767879] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:16.363 14:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:16.624 2024/11/06 14:36:44 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f5da63b3-e90a-4205-be57-a44639827627], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:32:16.624 request: 00:32:16.624 { 00:32:16.624 "method": "bdev_lvol_get_lvstores", 00:32:16.624 "params": { 00:32:16.624 "uuid": "f5da63b3-e90a-4205-be57-a44639827627" 00:32:16.624 } 00:32:16.624 } 00:32:16.624 Got JSON-RPC error response 00:32:16.624 GoRPCClient: error on JSON-RPC call 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:16.624 aio_bdev 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0c0741ed-ee70-41fe-83f5-f53458ab3185 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=0c0741ed-ee70-41fe-83f5-f53458ab3185 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:16.624 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:16.883 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c0741ed-ee70-41fe-83f5-f53458ab3185 -t 2000 00:32:17.144 [ 00:32:17.144 { 00:32:17.144 "aliases": [ 00:32:17.144 "lvs/lvol" 00:32:17.144 ], 00:32:17.144 "assigned_rate_limits": { 00:32:17.144 "r_mbytes_per_sec": 0, 00:32:17.144 "rw_ios_per_sec": 0, 00:32:17.144 "rw_mbytes_per_sec": 0, 00:32:17.144 "w_mbytes_per_sec": 0 00:32:17.144 }, 00:32:17.144 "block_size": 4096, 00:32:17.144 "claimed": false, 00:32:17.144 "driver_specific": { 00:32:17.144 "lvol": { 00:32:17.144 "base_bdev": "aio_bdev", 00:32:17.144 "clone": false, 00:32:17.144 "esnap_clone": false, 00:32:17.144 "lvol_store_uuid": "f5da63b3-e90a-4205-be57-a44639827627", 00:32:17.144 "num_allocated_clusters": 38, 00:32:17.144 "snapshot": false, 00:32:17.144 "thin_provision": false 00:32:17.144 } 00:32:17.144 }, 00:32:17.144 "name": "0c0741ed-ee70-41fe-83f5-f53458ab3185", 00:32:17.144 "num_blocks": 38912, 00:32:17.144 "product_name": "Logical Volume", 00:32:17.144 "supported_io_types": { 00:32:17.144 "abort": false, 00:32:17.144 "compare": false, 00:32:17.144 "compare_and_write": false, 00:32:17.144 "copy": false, 00:32:17.144 "flush": false, 00:32:17.144 "get_zone_info": false, 00:32:17.144 "nvme_admin": false, 00:32:17.144 "nvme_io": false, 00:32:17.144 "nvme_io_md": false, 00:32:17.144 "nvme_iov_md": false, 00:32:17.144 "read": true, 00:32:17.144 "reset": true, 00:32:17.144 "seek_data": true, 00:32:17.144 "seek_hole": true, 00:32:17.144 "unmap": true, 00:32:17.144 "write": true, 00:32:17.144 "write_zeroes": true, 00:32:17.144 "zcopy": false, 00:32:17.144 "zone_append": false, 00:32:17.144 "zone_management": false 00:32:17.144 }, 00:32:17.144 "uuid": 
"0c0741ed-ee70-41fe-83f5-f53458ab3185", 00:32:17.144 "zoned": false 00:32:17.144 } 00:32:17.144 ] 00:32:17.144 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:17.144 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:17.144 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:17.406 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:17.406 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5da63b3-e90a-4205-be57-a44639827627 00:32:17.406 14:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:17.666 14:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:17.666 14:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0c0741ed-ee70-41fe-83f5-f53458ab3185 00:32:17.666 14:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5da63b3-e90a-4205-be57-a44639827627 00:32:17.924 14:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:18.182 14:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:18.444 ************************************ 00:32:18.444 END TEST lvs_grow_clean 00:32:18.444 ************************************ 00:32:18.444 00:32:18.444 real 0m17.467s 00:32:18.444 user 0m16.694s 00:32:18.444 sys 0m1.925s 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:18.444 ************************************ 00:32:18.444 START TEST lvs_grow_dirty 00:32:18.444 ************************************ 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:18.444 14:36:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:18.444 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:18.704 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:18.704 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:18.964 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:18.964 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:18.964 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:19.226 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:19.226 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:19.226 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f04d2152-9881-4990-ab8b-2ee3ac888d5a lvol 150 00:32:19.487 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0b6b4386-7566-4170-b3df-0df186256aa9 00:32:19.487 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:19.487 14:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:19.487 [2024-11-06 14:36:47.111844] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:19.487 [2024-11-06 14:36:47.112044] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:19.487 true 00:32:19.747 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:19.747 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:19.747 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:19.747 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:20.008 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0b6b4386-7566-4170-b3df-0df186256aa9 00:32:20.269 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:20.269 [2024-11-06 14:36:47.900429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:20.534 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=112439 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 112439 /var/tmp/bdevperf.sock 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 112439 ']' 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:32:20.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:20.534 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:20.795 [2024-11-06 14:36:48.226845] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:20.795 [2024-11-06 14:36:48.227014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112439 ] 00:32:20.795 [2024-11-06 14:36:48.388922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.056 [2024-11-06 14:36:48.519436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.628 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:21.628 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:21.628 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:21.889 Nvme0n1 00:32:21.889 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:22.149 [ 00:32:22.149 { 00:32:22.149 "aliases": [ 00:32:22.149 "0b6b4386-7566-4170-b3df-0df186256aa9" 00:32:22.149 ], 00:32:22.149 "assigned_rate_limits": { 00:32:22.150 "r_mbytes_per_sec": 0, 00:32:22.150 "rw_ios_per_sec": 0, 00:32:22.150 "rw_mbytes_per_sec": 0, 00:32:22.150 "w_mbytes_per_sec": 0 00:32:22.150 }, 00:32:22.150 "block_size": 4096, 00:32:22.150 "claimed": false, 00:32:22.150 "driver_specific": { 00:32:22.150 "mp_policy": "active_passive", 00:32:22.150 "nvme": [ 00:32:22.150 { 00:32:22.150 "ctrlr_data": { 00:32:22.150 "ana_reporting": false, 00:32:22.150 "cntlid": 1, 00:32:22.150 "firmware_revision": "25.01", 00:32:22.150 "model_number": "SPDK bdev Controller", 00:32:22.150 "multi_ctrlr": true, 00:32:22.150 "oacs": { 00:32:22.150 "firmware": 0, 00:32:22.150 "format": 0, 00:32:22.150 "ns_manage": 0, 00:32:22.150 "security": 0 00:32:22.150 }, 00:32:22.150 "serial_number": "SPDK0", 00:32:22.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.150 "vendor_id": "0x8086" 00:32:22.150 }, 00:32:22.150 "ns_data": { 00:32:22.150 "can_share": true, 00:32:22.150 "id": 1 00:32:22.150 }, 00:32:22.150 "trid": { 00:32:22.150 "adrfam": "IPv4", 00:32:22.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.150 "traddr": "10.0.0.3", 00:32:22.150 "trsvcid": "4420", 00:32:22.150 "trtype": "TCP" 00:32:22.150 }, 00:32:22.150 "vs": { 00:32:22.150 "nvme_version": "1.3" 00:32:22.150 } 00:32:22.150 } 00:32:22.150 ] 00:32:22.150 }, 00:32:22.150 "memory_domains": [ 00:32:22.150 { 00:32:22.150 "dma_device_id": "system", 00:32:22.150 "dma_device_type": 1 00:32:22.150 } 00:32:22.150 ], 00:32:22.150 "name": "Nvme0n1", 00:32:22.150 "num_blocks": 38912, 
00:32:22.150 "numa_id": -1, 00:32:22.150 "product_name": "NVMe disk", 00:32:22.150 "supported_io_types": { 00:32:22.150 "abort": true, 00:32:22.150 "compare": true, 00:32:22.150 "compare_and_write": true, 00:32:22.150 "copy": true, 00:32:22.150 "flush": true, 00:32:22.150 "get_zone_info": false, 00:32:22.150 "nvme_admin": true, 00:32:22.150 "nvme_io": true, 00:32:22.150 "nvme_io_md": false, 00:32:22.150 "nvme_iov_md": false, 00:32:22.150 "read": true, 00:32:22.150 "reset": true, 00:32:22.150 "seek_data": false, 00:32:22.150 "seek_hole": false, 00:32:22.150 "unmap": true, 00:32:22.150 "write": true, 00:32:22.150 "write_zeroes": true, 00:32:22.150 "zcopy": false, 00:32:22.150 "zone_append": false, 00:32:22.150 "zone_management": false 00:32:22.150 }, 00:32:22.150 "uuid": "0b6b4386-7566-4170-b3df-0df186256aa9", 00:32:22.150 "zoned": false 00:32:22.150 } 00:32:22.150 ] 00:32:22.150 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=112485 00:32:22.150 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:22.150 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:22.150 Running I/O for 10 seconds... 00:32:23.091 Latency(us) 00:32:23.091 [2024-11-06T14:36:50.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.091 Nvme0n1 : 1.00 9702.00 37.90 0.00 0.00 0.00 0.00 0.00 00:32:23.091 [2024-11-06T14:36:50.726Z] =================================================================================================================== 00:32:23.091 [2024-11-06T14:36:50.726Z] Total : 9702.00 37.90 0.00 0.00 0.00 0.00 0.00 00:32:23.091 00:32:24.025 14:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:24.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.283 Nvme0n1 : 2.00 9739.00 38.04 0.00 0.00 0.00 0.00 0.00 00:32:24.283 [2024-11-06T14:36:51.918Z] =================================================================================================================== 00:32:24.283 [2024-11-06T14:36:51.918Z] Total : 9739.00 38.04 0.00 0.00 0.00 0.00 0.00 00:32:24.283 00:32:24.283 true 00:32:24.283 14:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:24.283 14:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:24.540 14:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:24.540 14:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:24.540 14:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 112485 00:32:25.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:32:25.104 Nvme0n1 : 3.00 9618.00 37.57 0.00 0.00 0.00 0.00 0.00 00:32:25.104 [2024-11-06T14:36:52.739Z] =================================================================================================================== 00:32:25.104 [2024-11-06T14:36:52.739Z] Total : 9618.00 37.57 0.00 0.00 0.00 0.00 0.00 00:32:25.104 00:32:26.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.478 Nvme0n1 : 4.00 9536.25 37.25 0.00 0.00 0.00 0.00 0.00 00:32:26.478 [2024-11-06T14:36:54.113Z] =================================================================================================================== 00:32:26.478 [2024-11-06T14:36:54.113Z] Total : 9536.25 37.25 0.00 0.00 0.00 0.00 0.00 00:32:26.478 00:32:27.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.412 Nvme0n1 : 5.00 9756.20 38.11 0.00 0.00 0.00 0.00 0.00 00:32:27.412 [2024-11-06T14:36:55.047Z] =================================================================================================================== 00:32:27.412 [2024-11-06T14:36:55.047Z] Total : 9756.20 38.11 0.00 0.00 0.00 0.00 0.00 00:32:27.412 00:32:28.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.346 Nvme0n1 : 6.00 10147.33 39.64 0.00 0.00 0.00 0.00 0.00 00:32:28.346 [2024-11-06T14:36:55.981Z] =================================================================================================================== 00:32:28.346 [2024-11-06T14:36:55.981Z] Total : 10147.33 39.64 0.00 0.00 0.00 0.00 0.00 00:32:28.346 00:32:29.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.280 Nvme0n1 : 7.00 10429.43 40.74 0.00 0.00 0.00 0.00 0.00 00:32:29.280 [2024-11-06T14:36:56.915Z] =================================================================================================================== 00:32:29.280 [2024-11-06T14:36:56.915Z] Total : 10429.43 40.74 0.00 0.00 0.00 0.00 0.00 00:32:29.280 00:32:30.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:30.213 Nvme0n1 : 8.00 10016.00 39.12 0.00 0.00 0.00 0.00 0.00 00:32:30.213 [2024-11-06T14:36:57.848Z] =================================================================================================================== 00:32:30.213 [2024-11-06T14:36:57.848Z] Total : 10016.00 39.12 0.00 0.00 0.00 0.00 0.00 00:32:30.213 00:32:31.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:31.148 Nvme0n1 : 9.00 10216.00 39.91 0.00 0.00 0.00 0.00 0.00 00:32:31.148 [2024-11-06T14:36:58.783Z] =================================================================================================================== 00:32:31.148 [2024-11-06T14:36:58.783Z] Total : 10216.00 39.91 0.00 0.00 0.00 0.00 0.00 00:32:31.148 00:32:32.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.082 Nvme0n1 : 10.00 10375.70 40.53 0.00 0.00 0.00 0.00 0.00 00:32:32.082 [2024-11-06T14:36:59.717Z] =================================================================================================================== 00:32:32.082 [2024-11-06T14:36:59.717Z] Total : 10375.70 40.53 0.00 0.00 0.00 0.00 0.00 00:32:32.082 00:32:32.082 00:32:32.082 Latency(us) 00:32:32.082 [2024-11-06T14:36:59.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.082 Nvme0n1 : 10.01 10380.60 40.55 0.00 0.00 12326.44 4839.58 
322638.77 00:32:32.082 [2024-11-06T14:36:59.717Z] =================================================================================================================== 00:32:32.082 [2024-11-06T14:36:59.717Z] Total : 10380.60 40.55 0.00 0.00 12326.44 4839.58 322638.77 00:32:32.082 { 00:32:32.082 "results": [ 00:32:32.082 { 00:32:32.082 "job": "Nvme0n1", 00:32:32.082 "core_mask": "0x2", 00:32:32.082 "workload": "randwrite", 00:32:32.082 "status": "finished", 00:32:32.082 "queue_depth": 128, 00:32:32.082 "io_size": 4096, 00:32:32.082 "runtime": 10.007607, 00:32:32.082 "iops": 10380.603474936615, 00:32:32.082 "mibps": 40.549232323971154, 00:32:32.082 "io_failed": 0, 00:32:32.082 "io_timeout": 0, 00:32:32.082 "avg_latency_us": 12326.43656398162, 00:32:32.082 "min_latency_us": 4839.581538461539, 00:32:32.082 "max_latency_us": 322638.76923076925 00:32:32.082 } 00:32:32.082 ], 00:32:32.082 "core_count": 1 00:32:32.082 } 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 112439 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 112439 ']' 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 112439 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112439 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:32.340 killing process with pid 112439 00:32:32.340 Received shutdown signal, test time was about 10.000000 seconds 00:32:32.340 00:32:32.340 Latency(us) 00:32:32.340 [2024-11-06T14:36:59.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.340 [2024-11-06T14:36:59.975Z] =================================================================================================================== 00:32:32.340 [2024-11-06T14:36:59.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112439' 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 112439 00:32:32.340 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 112439 00:32:32.911 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:32.911 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
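The ~322 ms max latency in the summary above is expected for this pass: unlike the clean run, the dirty variant grows the lvstore while bdevperf is still driving random writes (the bdev_lvol_grow_lvstore call lands around iteration 2), which plausibly accounts for the brief stall. Condensed from the trace above, the grow sequence is roughly the following sketch — paths, UUID, and rpc.py invocations are copied from this run's trace, not a definitive recipe:

    # Backing file starts at 200M and is resized to 400M before the run
    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    # Let SPDK pick up the new size (logs "AIO device is resized: ... old block count 51200, new block count 102400")
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
    # While bdevperf I/O is in flight, extend the lvstore into the new space...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f04d2152-9881-4990-ab8b-2ee3ac888d5a
    # ...and confirm the cluster count doubled (49 -> 99)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a | jq -r '.[0].total_data_clusters'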
00:32:33.170 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:33.171 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 111860 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 111860 00:32:33.430 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 111860 Killed "${NVMF_APP[@]}" "$@" 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=112661 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 112661 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 112661 ']' 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:33.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:33.430 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:33.430 [2024-11-06 14:37:00.967584] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
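Here the dirty variant diverges from the clean one: instead of deleting the lvstore, the test SIGKILLs the nvmf_tgt that owns it (pid 111860 above) and starts a replacement in interrupt mode, leaving the blobstore metadata on the AIO file deliberately unflushed. Stripped of the harness wrappers, the restart amounts to the sketch below; the netns name and flags are copied from the invocation above, and $nvmfpid stands in for the harness's pid bookkeeping:

    # Kill the target hard so the lvstore is never closed cleanly
    kill -9 "$nvmfpid"
    # Start a fresh target inside the test netns, this time with --interrupt-mode
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    # Re-attaching the AIO file triggers blobstore recovery; the lines that follow
    # show "Performing recovery on blobstore" and "Recover: blob 0x0" / "0x1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096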
00:32:33.430 [2024-11-06 14:37:00.969152] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:33.430 [2024-11-06 14:37:00.969223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.706 [2024-11-06 14:37:01.126526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.706 [2024-11-06 14:37:01.217237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.706 [2024-11-06 14:37:01.217279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.706 [2024-11-06 14:37:01.217294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.706 [2024-11-06 14:37:01.217302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.706 [2024-11-06 14:37:01.217309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.706 [2024-11-06 14:37:01.218007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.986 [2024-11-06 14:37:01.411675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:33.986 [2024-11-06 14:37:01.411973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:34.244 14:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:34.244 14:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:34.244 14:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:34.244 14:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:34.244 14:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:34.244 14:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.244 14:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:34.504 [2024-11-06 14:37:01.958127] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:34.504 [2024-11-06 14:37:01.959177] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:34.504 [2024-11-06 14:37:01.961083] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0b6b4386-7566-4170-b3df-0df186256aa9 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # local bdev_name=0b6b4386-7566-4170-b3df-0df186256aa9 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:34.504 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:34.766 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0b6b4386-7566-4170-b3df-0df186256aa9 -t 2000 00:32:35.026 [ 00:32:35.026 { 00:32:35.026 "aliases": [ 00:32:35.026 "lvs/lvol" 00:32:35.026 ], 00:32:35.026 "assigned_rate_limits": { 00:32:35.026 "r_mbytes_per_sec": 0, 00:32:35.026 "rw_ios_per_sec": 0, 00:32:35.026 "rw_mbytes_per_sec": 0, 00:32:35.026 "w_mbytes_per_sec": 0 00:32:35.026 }, 00:32:35.026 "block_size": 4096, 00:32:35.026 "claimed": false, 00:32:35.026 "driver_specific": { 00:32:35.026 "lvol": { 00:32:35.026 "base_bdev": "aio_bdev", 00:32:35.026 "clone": false, 00:32:35.026 "esnap_clone": false, 00:32:35.026 "lvol_store_uuid": "f04d2152-9881-4990-ab8b-2ee3ac888d5a", 00:32:35.026 "num_allocated_clusters": 38, 00:32:35.026 "snapshot": false, 00:32:35.026 "thin_provision": false 00:32:35.026 } 00:32:35.026 }, 00:32:35.026 "name": "0b6b4386-7566-4170-b3df-0df186256aa9", 00:32:35.026 "num_blocks": 38912, 00:32:35.026 "product_name": "Logical Volume", 00:32:35.026 "supported_io_types": { 00:32:35.026 "abort": false, 00:32:35.026 "compare": false, 00:32:35.026 "compare_and_write": false, 00:32:35.026 "copy": false, 00:32:35.026 "flush": false, 00:32:35.026 "get_zone_info": false, 00:32:35.026 "nvme_admin": false, 00:32:35.026 "nvme_io": false, 00:32:35.026 "nvme_io_md": false, 00:32:35.026 "nvme_iov_md": false, 00:32:35.026 "read": true, 00:32:35.026 "reset": true, 00:32:35.026 "seek_data": true, 00:32:35.026 "seek_hole": true, 00:32:35.026 "unmap": true, 00:32:35.026 "write": true, 00:32:35.026 "write_zeroes": true, 00:32:35.026 "zcopy": false, 00:32:35.026 "zone_append": false, 00:32:35.026 "zone_management": false 00:32:35.026 }, 00:32:35.026 "uuid": "0b6b4386-7566-4170-b3df-0df186256aa9", 00:32:35.026 "zoned": false 00:32:35.026 } 00:32:35.026 ] 00:32:35.026 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:35.026 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:35.026 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:35.026 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:35.026 14:37:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:35.026 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:35.286 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:35.286 14:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:35.546 [2024-11-06 14:37:03.026867] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:35.546 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:35.807 2024/11/06 14:37:03 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f04d2152-9881-4990-ab8b-2ee3ac888d5a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:32:35.807 request: 00:32:35.807 { 00:32:35.807 "method": "bdev_lvol_get_lvstores", 00:32:35.807 "params": { 00:32:35.807 "uuid": "f04d2152-9881-4990-ab8b-2ee3ac888d5a" 00:32:35.807 } 00:32:35.807 } 
00:32:35.807 Got JSON-RPC error response 00:32:35.807 GoRPCClient: error on JSON-RPC call 00:32:35.807 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:35.807 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:35.807 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:35.807 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:35.807 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:36.067 aio_bdev 00:32:36.067 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0b6b4386-7566-4170-b3df-0df186256aa9 00:32:36.067 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=0b6b4386-7566-4170-b3df-0df186256aa9 00:32:36.067 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:36.067 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:36.067 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:36.067 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:36.068 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:36.326 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0b6b4386-7566-4170-b3df-0df186256aa9 -t 2000 00:32:36.326 [ 00:32:36.326 { 00:32:36.326 "aliases": [ 00:32:36.326 "lvs/lvol" 00:32:36.326 ], 00:32:36.326 "assigned_rate_limits": { 00:32:36.326 "r_mbytes_per_sec": 0, 00:32:36.326 "rw_ios_per_sec": 0, 00:32:36.326 "rw_mbytes_per_sec": 0, 00:32:36.326 "w_mbytes_per_sec": 0 00:32:36.326 }, 00:32:36.326 "block_size": 4096, 00:32:36.326 "claimed": false, 00:32:36.326 "driver_specific": { 00:32:36.326 "lvol": { 00:32:36.326 "base_bdev": "aio_bdev", 00:32:36.326 "clone": false, 00:32:36.326 "esnap_clone": false, 00:32:36.326 "lvol_store_uuid": "f04d2152-9881-4990-ab8b-2ee3ac888d5a", 00:32:36.326 "num_allocated_clusters": 38, 00:32:36.326 "snapshot": false, 00:32:36.326 "thin_provision": false 00:32:36.326 } 00:32:36.326 }, 00:32:36.326 "name": "0b6b4386-7566-4170-b3df-0df186256aa9", 00:32:36.326 "num_blocks": 38912, 00:32:36.326 "product_name": "Logical Volume", 00:32:36.326 "supported_io_types": { 00:32:36.326 "abort": false, 00:32:36.326 "compare": false, 00:32:36.326 "compare_and_write": false, 00:32:36.326 "copy": false, 00:32:36.326 "flush": false, 00:32:36.326 "get_zone_info": false, 00:32:36.326 "nvme_admin": false, 00:32:36.326 "nvme_io": false, 00:32:36.326 "nvme_io_md": false, 00:32:36.326 "nvme_iov_md": false, 
00:32:36.326 "read": true, 00:32:36.326 "reset": true, 00:32:36.326 "seek_data": true, 00:32:36.326 "seek_hole": true, 00:32:36.326 "unmap": true, 00:32:36.326 "write": true, 00:32:36.326 "write_zeroes": true, 00:32:36.326 "zcopy": false, 00:32:36.326 "zone_append": false, 00:32:36.326 "zone_management": false 00:32:36.326 }, 00:32:36.326 "uuid": "0b6b4386-7566-4170-b3df-0df186256aa9", 00:32:36.326 "zoned": false 00:32:36.326 } 00:32:36.326 ] 00:32:36.326 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:36.326 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:36.326 14:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:36.588 14:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:36.588 14:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:36.588 14:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:36.849 14:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:36.849 14:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0b6b4386-7566-4170-b3df-0df186256aa9 00:32:37.110 14:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f04d2152-9881-4990-ab8b-2ee3ac888d5a 00:32:37.370 14:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:37.629 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:38.202 ************************************ 00:32:38.202 END TEST lvs_grow_dirty 00:32:38.202 ************************************ 00:32:38.202 00:32:38.202 real 0m19.471s 00:32:38.202 user 0m28.153s 00:32:38.202 sys 0m6.222s 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 
00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:38.202 nvmf_trace.0 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.202 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.777 rmmod nvme_tcp 00:32:38.777 rmmod nvme_fabrics 00:32:38.777 rmmod nvme_keyring 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 112661 ']' 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 112661 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 112661 ']' 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 112661 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112661 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:38.777 killing process with pid 112661 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 112661' 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 112661 00:32:38.777 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 112661 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.722 14:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.722 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:32:39.722 00:32:39.722 real 0m40.471s 00:32:39.722 user 0m46.762s 00:32:39.723 sys 0m9.223s 00:32:39.723 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:39.723 ************************************ 00:32:39.723 END TEST nvmf_lvs_grow 00:32:39.723 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:39.723 ************************************ 00:32:39.723 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:39.723 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:39.723 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:39.723 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.986 ************************************ 00:32:39.986 START TEST nvmf_bdev_io_wait 00:32:39.986 ************************************ 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:39.986 * Looking for test storage... 
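run_test is the suite's timing wrapper: it prints the START TEST banner above, runs the script with its arguments, and prints the END TEST banner plus the real/user/sys times when it finishes. A minimal sketch of the pattern, assuming a simplified helper (the real run_test in autotest_common.sh also tracks exit codes and xtrace state):

    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"                        # the test script and its arguments
        echo "END TEST $test_name"
    }
    run_test nvmf_bdev_io_wait \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode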
00:32:39.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.986 --rc genhtml_branch_coverage=1 00:32:39.986 --rc genhtml_function_coverage=1 00:32:39.986 --rc genhtml_legend=1 00:32:39.986 --rc geninfo_all_blocks=1 00:32:39.986 --rc geninfo_unexecuted_blocks=1 00:32:39.986 00:32:39.986 ' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.986 --rc genhtml_branch_coverage=1 00:32:39.986 --rc genhtml_function_coverage=1 00:32:39.986 --rc genhtml_legend=1 00:32:39.986 --rc geninfo_all_blocks=1 00:32:39.986 --rc geninfo_unexecuted_blocks=1 00:32:39.986 00:32:39.986 ' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.986 --rc genhtml_branch_coverage=1 00:32:39.986 --rc genhtml_function_coverage=1 00:32:39.986 --rc genhtml_legend=1 00:32:39.986 --rc geninfo_all_blocks=1 00:32:39.986 --rc geninfo_unexecuted_blocks=1 00:32:39.986 00:32:39.986 ' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.986 --rc genhtml_branch_coverage=1 00:32:39.986 --rc genhtml_function_coverage=1 00:32:39.986 --rc genhtml_legend=1 00:32:39.986 --rc geninfo_all_blocks=1 00:32:39.986 --rc 
geninfo_unexecuted_blocks=1 00:32:39.986 00:32:39.986 ' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.986 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:39.987 Cannot find device "nvmf_init_br" 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:39.987 Cannot find device "nvmf_init_br2" 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:39.987 Cannot find device "nvmf_tgt_br" 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:39.987 Cannot find device "nvmf_tgt_br2" 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:39.987 Cannot find device "nvmf_init_br" 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:39.987 Cannot find device "nvmf_init_br2" 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:32:39.987 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:32:40.248 Cannot find device "nvmf_tgt_br" 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:40.248 Cannot find device "nvmf_tgt_br2" 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:40.248 Cannot find device "nvmf_br" 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:40.248 Cannot find device "nvmf_init_if" 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:40.248 Cannot find device "nvmf_init_if2" 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:40.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:40.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:40.248 14:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:40.248 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:40.249 
14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:32:40.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:32:40.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms
00:32:40.249
00:32:40.249 --- 10.0.0.3 ping statistics ---
00:32:40.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:40.249 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:40.249 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:40.249 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms
00:32:40.249
00:32:40.249 --- 10.0.0.4 ping statistics ---
00:32:40.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:40.249 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:40.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:40.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:32:40.249
00:32:40.249 --- 10.0.0.1 ping statistics ---
00:32:40.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:40.249 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:40.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:40.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:32:40.249
00:32:40.249 --- 10.0.0.2 ping statistics ---
00:32:40.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:40.249 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:40.249 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=113130
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 113130
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 113130 ']'
00:32:40.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
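waitforlisten blocks until the just-forked target answers RPCs on /var/tmp/spdk.sock, giving up after the max_retries=100 traced above. A rough sketch of what it amounts to (the loop body is an assumption; rpc_get_methods is a real SPDK RPC, used here only as a liveness probe):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                       # never came up
    }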
00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:40.509 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.509 [2024-11-06 14:37:07.975354] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:40.509 [2024-11-06 14:37:07.977628] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:40.509 [2024-11-06 14:37:07.977737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:40.509 [2024-11-06 14:37:08.142777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:40.771 [2024-11-06 14:37:08.270029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:40.771 [2024-11-06 14:37:08.270093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:40.771 [2024-11-06 14:37:08.270104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:40.771 [2024-11-06 14:37:08.270116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:40.771 [2024-11-06 14:37:08.270124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:40.771 [2024-11-06 14:37:08.272085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.771 [2024-11-06 14:37:08.272473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:40.771 [2024-11-06 14:37:08.272691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:40.771 [2024-11-06 14:37:08.272820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.771 [2024-11-06 14:37:08.273484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
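Strung back together from the nvmfappstart trace above, the launch that produced these notices is:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &

    # -m 0xF gives four reactors (cores 0-3, matching the four "Reactor started"
    # notices); -e 0xFFFF enables every tracepoint group, which is what backs the
    # /dev/shm/nvmf_trace.0 file archived at test end; --wait-for-rpc holds off
    # subsystem initialization until framework_start_init arrives over RPC.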
00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.344 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.605 [2024-11-06 14:37:09.073626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:41.605 [2024-11-06 14:37:09.075190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:41.605 [2024-11-06 14:37:09.075927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:41.605 [2024-11-06 14:37:09.076037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
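The tiny pool passed to bdev_set_options is the crux of this test: with only 5 bdev_io structures in the pool and a per-channel cache of 1, the queue depth of 128 used later exhausts the pool immediately, which presumably forces the ENOMEM retry path that the bdev_io_wait machinery exists to handle. The ordering also matters: because the target was started with --wait-for-rpc, the pool sizes can still be changed before framework init locks them in. Equivalent standalone RPCs (rpc_cmd above is a thin wrapper around scripts/rpc.py):

    scripts/rpc.py bdev_set_options -p 5 -c 1   # bdev_io pool size 5, per-channel cache 1
    scripts/rpc.py framework_start_init         # only now do the bdev/nvmf subsystems initialize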
00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.605 [2024-11-06 14:37:09.085822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.605 Malloc0 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.605 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.606 [2024-11-06 14:37:09.190106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=113188 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=113190 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:41.606 { 00:32:41.606 "params": { 00:32:41.606 "name": "Nvme$subsystem", 00:32:41.606 "trtype": "$TEST_TRANSPORT", 00:32:41.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.606 "adrfam": "ipv4", 00:32:41.606 "trsvcid": "$NVMF_PORT", 00:32:41.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.606 "hdgst": ${hdgst:-false}, 00:32:41.606 "ddgst": ${ddgst:-false} 00:32:41.606 }, 00:32:41.606 "method": "bdev_nvme_attach_controller" 00:32:41.606 } 00:32:41.606 EOF 00:32:41.606 )") 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=113192 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:41.606 { 00:32:41.606 "params": { 00:32:41.606 "name": "Nvme$subsystem", 00:32:41.606 "trtype": "$TEST_TRANSPORT", 00:32:41.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.606 "adrfam": "ipv4", 00:32:41.606 "trsvcid": "$NVMF_PORT", 00:32:41.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.606 "hdgst": ${hdgst:-false}, 00:32:41.606 "ddgst": ${ddgst:-false} 00:32:41.606 }, 00:32:41.606 "method": "bdev_nvme_attach_controller" 00:32:41.606 } 00:32:41.606 EOF 00:32:41.606 )") 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=113195 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:41.606 14:37:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:41.606 { 00:32:41.606 "params": { 00:32:41.606 "name": "Nvme$subsystem", 00:32:41.606 "trtype": "$TEST_TRANSPORT", 00:32:41.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.606 "adrfam": "ipv4", 00:32:41.606 "trsvcid": "$NVMF_PORT", 00:32:41.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.606 "hdgst": ${hdgst:-false}, 00:32:41.606 "ddgst": ${ddgst:-false} 00:32:41.606 }, 00:32:41.606 "method": "bdev_nvme_attach_controller" 00:32:41.606 } 00:32:41.606 EOF 00:32:41.606 )") 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
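Collecting the rpc_cmd calls traced above, the target-side provisioning for this test boils down to five RPCs (again via scripts/rpc.py against the namespaced target):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420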
00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.606 "params": { 00:32:41.606 "name": "Nvme1", 00:32:41.606 "trtype": "tcp", 00:32:41.606 "traddr": "10.0.0.3", 00:32:41.606 "adrfam": "ipv4", 00:32:41.606 "trsvcid": "4420", 00:32:41.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.606 "hdgst": false, 00:32:41.606 "ddgst": false 00:32:41.606 }, 00:32:41.606 "method": "bdev_nvme_attach_controller" 00:32:41.606 }' 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.606 "params": { 00:32:41.606 "name": "Nvme1", 00:32:41.606 "trtype": "tcp", 00:32:41.606 "traddr": "10.0.0.3", 00:32:41.606 "adrfam": "ipv4", 00:32:41.606 "trsvcid": "4420", 00:32:41.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.606 "hdgst": false, 00:32:41.606 "ddgst": false 00:32:41.606 }, 00:32:41.606 "method": "bdev_nvme_attach_controller" 00:32:41.606 }' 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:41.606 { 00:32:41.606 "params": { 00:32:41.606 "name": "Nvme$subsystem", 00:32:41.606 "trtype": "$TEST_TRANSPORT", 00:32:41.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.606 "adrfam": "ipv4", 00:32:41.606 "trsvcid": "$NVMF_PORT", 00:32:41.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.606 "hdgst": ${hdgst:-false}, 00:32:41.606 "ddgst": ${ddgst:-false} 00:32:41.606 }, 00:32:41.606 "method": "bdev_nvme_attach_controller" 00:32:41.606 } 00:32:41.606 EOF 00:32:41.606 )") 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.606 "params": { 00:32:41.606 "name": "Nvme1", 00:32:41.606 "trtype": "tcp", 00:32:41.606 "traddr": "10.0.0.3", 00:32:41.606 "adrfam": "ipv4", 00:32:41.606 "trsvcid": "4420", 00:32:41.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.606 "hdgst": false, 00:32:41.606 "ddgst": false 00:32:41.606 }, 00:32:41.606 "method": "bdev_nvme_attach_controller" 00:32:41.606 }' 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
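gen_nvmf_target_json pipes those params fragments through jq, and each bdevperf instance reads the result from /dev/fd/63. The params below are verbatim from the printf output above; the outer subsystems wrapper is an assumption about the fragment's final shape:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }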
00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.606 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.606 "params": { 00:32:41.607 "name": "Nvme1", 00:32:41.607 "trtype": "tcp", 00:32:41.607 "traddr": "10.0.0.3", 00:32:41.607 "adrfam": "ipv4", 00:32:41.607 "trsvcid": "4420", 00:32:41.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.607 "hdgst": false, 00:32:41.607 "ddgst": false 00:32:41.607 }, 00:32:41.607 "method": "bdev_nvme_attach_controller" 00:32:41.607 }' 00:32:41.868 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 113188 00:32:41.868 [2024-11-06 14:37:09.293798] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:41.868 [2024-11-06 14:37:09.293944] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:41.868 [2024-11-06 14:37:09.297216] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:41.868 [2024-11-06 14:37:09.297355] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:41.868 [2024-11-06 14:37:09.297599] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:41.868 [2024-11-06 14:37:09.297790] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:41.868 [2024-11-06 14:37:09.300273] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:41.868 [2024-11-06 14:37:09.300405] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:42.136 [2024-11-06 14:37:09.563873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.136 [2024-11-06 14:37:09.635498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.136 [2024-11-06 14:37:09.695218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:42.136 [2024-11-06 14:37:09.710983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.136 [2024-11-06 14:37:09.761269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:42.394 [2024-11-06 14:37:09.787059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.394 [2024-11-06 14:37:09.836140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:42.394 [2024-11-06 14:37:09.909626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:42.394 Running I/O for 1 seconds... 00:32:42.652 Running I/O for 1 seconds... 00:32:42.652 Running I/O for 1 seconds... 00:32:42.652 Running I/O for 1 seconds... 
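Four initiator-side bdevperf processes run concurrently, one core each, differing only in core mask, shm id (-i), and workload. Condensed from the four invocations above (flags verbatim):

    for spec in '0x10 1 write' '0x20 2 read' '0x40 3 flush' '0x80 4 unmap'; do
        set -- $spec
        build/examples/bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$3" -t 1 -s 256 &    # qd 128, 4 KiB IOs, 1 s, 256 MB mem
    done
    # the script then reaps each PID individually (WRITE_PID, READ_PID, FLUSH_PID, UNMAP_PID)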
00:32:43.586 7071.00 IOPS, 27.62 MiB/s
00:32:43.586 Latency(us)
00:32:43.586 [2024-11-06T14:37:11.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:43.586 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:32:43.586 Nvme1n1 : 1.02 7103.91 27.75 0.00 0.00 17909.95 5620.97 29440.79
00:32:43.586 [2024-11-06T14:37:11.221Z] ===================================================================================================================
00:32:43.586 [2024-11-06T14:37:11.221Z] Total : 7103.91 27.75 0.00 0.00 17909.95 5620.97 29440.79
00:32:43.586 182760.00 IOPS, 713.91 MiB/s
00:32:43.586 Latency(us)
00:32:43.586 [2024-11-06T14:37:11.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:43.586 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:32:43.586 Nvme1n1 : 1.00 182406.99 712.53 0.00 0.00 698.17 338.71 1966.08
00:32:43.586 [2024-11-06T14:37:11.221Z] ===================================================================================================================
00:32:43.586 [2024-11-06T14:37:11.221Z] Total : 182406.99 712.53 0.00 0.00 698.17 338.71 1966.08
00:32:43.586 10811.00 IOPS, 42.23 MiB/s
00:32:43.586 Latency(us)
00:32:43.586 [2024-11-06T14:37:11.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:43.586 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:32:43.586 Nvme1n1 : 1.01 10856.13 42.41 0.00 0.00 11742.01 4688.34 17140.18
00:32:43.586 [2024-11-06T14:37:11.221Z] ===================================================================================================================
00:32:43.586 [2024-11-06T14:37:11.221Z] Total : 10856.13 42.41 0.00 0.00 11742.01 4688.34 17140.18
00:32:43.586 7144.00 IOPS, 27.91 MiB/s
00:32:43.586 Latency(us)
00:32:43.586 [2024-11-06T14:37:11.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:43.586 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:32:43.586 Nvme1n1 : 1.01 7277.02 28.43 0.00 0.00 17541.96 3780.92 39724.90
00:32:43.586 [2024-11-06T14:37:11.221Z] ===================================================================================================================
00:32:43.586 [2024-11-06T14:37:11.221Z] Total : 7277.02 28.43 0.00 0.00 17541.96 3780.92 39724.90
00:32:43.845 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 113190
00:32:44.103 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 113192
00:32:44.103 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 113195
00:32:44.103 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:44.104 rmmod nvme_tcp 00:32:44.104 rmmod nvme_fabrics 00:32:44.104 rmmod nvme_keyring 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 113130 ']' 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 113130 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 113130 ']' 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 113130 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:44.104 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 113130 00:32:44.364 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:44.364 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:44.364 killing process with pid 113130 00:32:44.364 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 113130' 00:32:44.364 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 113130 00:32:44.364 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 113130 00:32:44.934 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:44.934 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:44.934 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:44.934 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
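One way to read the tables above: runtime(s) is the measured window, and the Average/min/max columns are per-I/O latency in microseconds (per the Latency(us) header), while the MiB/s column is simply IOPS times the 4096-byte I/O size. A quick standalone sanity check of the write row, using only numbers from the table:

    # verify MiB/s = IOPS * io_size / 2^20 for the write job above
    awk 'BEGIN { printf "%.2f MiB/s\n", 7103.91 * 4096 / (1024 * 1024) }'   # prints 27.75 MiB/s

The same identity holds for the other rows, e.g. the flush job: 182406.99 * 4096 / 2^20 = 712.53 MiB/s. The standalone "7071.00 IOPS" style lines are the per-second progress samples, while the table rows are the averages over the whole run, which is why the two differ slightly.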
00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:32:44.935 00:32:44.935 real 0m5.207s 00:32:44.935 user 0m17.956s 00:32:44.935 sys 0m2.565s 00:32:44.935 ************************************ 00:32:44.935 END TEST nvmf_bdev_io_wait 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:44.935 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:44.935 
************************************ 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:45.200 ************************************ 00:32:45.200 START TEST nvmf_queue_depth 00:32:45.200 ************************************ 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:45.200 * Looking for test storage... 00:32:45.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:45.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.200 --rc genhtml_branch_coverage=1 00:32:45.200 --rc genhtml_function_coverage=1 00:32:45.200 --rc genhtml_legend=1 00:32:45.200 --rc geninfo_all_blocks=1 00:32:45.200 --rc geninfo_unexecuted_blocks=1 00:32:45.200 00:32:45.200 ' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:45.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.200 --rc genhtml_branch_coverage=1 00:32:45.200 --rc genhtml_function_coverage=1 00:32:45.200 --rc genhtml_legend=1 00:32:45.200 --rc geninfo_all_blocks=1 00:32:45.200 --rc geninfo_unexecuted_blocks=1 00:32:45.200 00:32:45.200 ' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:45.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.200 --rc genhtml_branch_coverage=1 00:32:45.200 --rc genhtml_function_coverage=1 00:32:45.200 --rc genhtml_legend=1 00:32:45.200 --rc geninfo_all_blocks=1 00:32:45.200 --rc geninfo_unexecuted_blocks=1 00:32:45.200 00:32:45.200 ' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:45.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.200 --rc genhtml_branch_coverage=1 00:32:45.200 --rc genhtml_function_coverage=1 00:32:45.200 --rc genhtml_legend=1 00:32:45.200 --rc geninfo_all_blocks=1 00:32:45.200 --rc 
geninfo_unexecuted_blocks=1 00:32:45.200 00:32:45.200 ' 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.200 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:45.201 Cannot find device "nvmf_init_br" 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:45.201 Cannot find device "nvmf_init_br2" 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:45.201 Cannot find device "nvmf_tgt_br" 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:45.201 Cannot find device "nvmf_tgt_br2" 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:45.201 Cannot find device "nvmf_init_br" 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:32:45.201 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:45.201 Cannot find device "nvmf_init_br2" 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:32:45.461 
14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:45.461 Cannot find device "nvmf_tgt_br" 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:45.461 Cannot find device "nvmf_tgt_br2" 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:45.461 Cannot find device "nvmf_br" 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:45.461 Cannot find device "nvmf_init_if" 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:45.461 Cannot find device "nvmf_init_if2" 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:45.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:45.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:45.461 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:45.461 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:45.721 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:45.721 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:32:45.721 00:32:45.721 --- 10.0.0.3 ping statistics --- 00:32:45.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.721 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:45.721 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:45.721 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:32:45.721 00:32:45.721 --- 10.0.0.4 ping statistics --- 00:32:45.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.721 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:45.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:45.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:32:45.721 00:32:45.721 --- 10.0.0.1 ping statistics --- 00:32:45.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.721 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:45.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:45.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:32:45.721 00:32:45.721 --- 10.0.0.2 ping statistics --- 00:32:45.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.721 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=113488 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 113488 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 113488 ']' 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
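The four pings above verify the veth topology that the preceding nvmf_veth_init records built: two initiator-side addresses (10.0.0.1, 10.0.0.2) in the root namespace and two target-side addresses (10.0.0.3, 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined at layer 2 by the nvmf_br bridge. Condensed to a single interface pair per side, the effect of the helper is roughly the following sketch (it omits the second pair and the iptables ACCEPT rules shown in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two veth peers
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                           # root namespace -> target, as above

With connectivity confirmed, nvmfappstart launches the target inside the namespace and waitforlisten polls for its RPC socket, which is what the "Waiting for process..." line reflects.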
00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:45.721 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.721 [2024-11-06 14:37:13.219034] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.721 [2024-11-06 14:37:13.221018] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:45.721 [2024-11-06 14:37:13.221108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.980 [2024-11-06 14:37:13.383464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.980 [2024-11-06 14:37:13.505394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.980 [2024-11-06 14:37:13.505465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.980 [2024-11-06 14:37:13.505485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.980 [2024-11-06 14:37:13.505496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.980 [2024-11-06 14:37:13.505505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.980 [2024-11-06 14:37:13.506459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.240 [2024-11-06 14:37:13.765265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.240 [2024-11-06 14:37:13.765649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
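The thread.c notices above show the target coming up with --interrupt-mode enabled: both app_thread and nvmf_tgt_poll_group_000 are placed in interrupt mode instead of the usual busy-polling reactors. The rpc_cmd records that follow then configure the freshly started target: create the TCP transport, a Malloc0 RAM bdev, the cnode1 subsystem, its namespace, and its 10.0.0.3:4420 listener. As plain scripts/rpc.py calls the same sequence would look roughly like this (a sketch; rpc_cmd in the harness wraps scripts/rpc.py and talks to the default /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # same flags the test passes via NVMF_TRANSPORT_OPTS
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420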
00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.499 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.759 [2024-11-06 14:37:14.143528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.760 Malloc0 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.760 [2024-11-06 14:37:14.247549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=113538 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 113538 /var/tmp/bdevperf.sock 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 113538 ']' 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:46.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:46.760 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.760 [2024-11-06 14:37:14.331979] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:32:46.760 [2024-11-06 14:37:14.332281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113538 ] 00:32:47.020 [2024-11-06 14:37:14.495262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.020 [2024-11-06 14:37:14.642258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.593 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:47.593 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:47.593 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:47.593 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.593 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.853 NVMe0n1 00:32:47.853 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.853 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:47.853 Running I/O for 10 seconds... 00:32:50.174 7182.00 IOPS, 28.05 MiB/s [2024-11-06T14:37:18.743Z] 7946.50 IOPS, 31.04 MiB/s [2024-11-06T14:37:19.678Z] 8881.33 IOPS, 34.69 MiB/s [2024-11-06T14:37:20.611Z] 9436.75 IOPS, 36.86 MiB/s [2024-11-06T14:37:21.544Z] 9723.60 IOPS, 37.98 MiB/s [2024-11-06T14:37:22.478Z] 10009.33 IOPS, 39.10 MiB/s [2024-11-06T14:37:23.412Z] 10207.57 IOPS, 39.87 MiB/s [2024-11-06T14:37:24.787Z] 10333.12 IOPS, 40.36 MiB/s [2024-11-06T14:37:25.722Z] 10444.67 IOPS, 40.80 MiB/s [2024-11-06T14:37:25.722Z] 10552.30 IOPS, 41.22 MiB/s 00:32:58.087 Latency(us) 00:32:58.087 [2024-11-06T14:37:25.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.087 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:58.087 Verification LBA range: start 0x0 length 0x4000 00:32:58.087 NVMe0n1 : 10.06 10587.26 41.36 0.00 0.00 96362.67 23592.96 82676.18 00:32:58.087 [2024-11-06T14:37:25.722Z] =================================================================================================================== 00:32:58.087 [2024-11-06T14:37:25.722Z] Total : 10587.26 41.36 0.00 0.00 96362.67 23592.96 82676.18 00:32:58.087 { 00:32:58.087 "results": [ 00:32:58.087 { 00:32:58.087 "job": "NVMe0n1", 00:32:58.087 "core_mask": "0x1", 00:32:58.087 "workload": "verify", 00:32:58.087 "status": "finished", 00:32:58.087 "verify_range": { 00:32:58.087 "start": 0, 00:32:58.087 "length": 16384 00:32:58.087 }, 00:32:58.087 "queue_depth": 1024, 00:32:58.087 "io_size": 4096, 00:32:58.087 "runtime": 10.060299, 00:32:58.087 "iops": 10587.2598816397, 00:32:58.087 "mibps": 41.35648391265508, 00:32:58.087 "io_failed": 0, 00:32:58.087 "io_timeout": 0, 00:32:58.088 "avg_latency_us": 96362.66965009754, 00:32:58.088 "min_latency_us": 23592.96, 00:32:58.088 "max_latency_us": 82676.18461538461 00:32:58.088 } 00:32:58.088 ], 00:32:58.088 
"core_count": 1 00:32:58.088 } 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 113538 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 113538 ']' 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 113538 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 113538 00:32:58.088 killing process with pid 113538 00:32:58.088 Received shutdown signal, test time was about 10.000000 seconds 00:32:58.088 00:32:58.088 Latency(us) 00:32:58.088 [2024-11-06T14:37:25.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.088 [2024-11-06T14:37:25.723Z] =================================================================================================================== 00:32:58.088 [2024-11-06T14:37:25.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 113538' 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 113538 00:32:58.088 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 113538 00:32:58.345 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:58.345 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:58.345 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:58.345 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:58.607 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:58.607 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:58.607 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:58.607 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:58.607 rmmod nvme_tcp 00:32:58.607 rmmod nvme_fabrics 00:32:58.607 rmmod nvme_keyring 00:32:58.607 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:58.607 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:58.608 14:37:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 113488 ']' 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 113488 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 113488 ']' 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 113488 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 113488 00:32:58.608 killing process with pid 113488 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 113488' 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 113488 00:32:58.608 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 113488 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:59.186 14:37:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:59.186 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:32:59.448 00:32:59.448 real 0m14.313s 00:32:59.448 user 0m23.260s 00:32:59.448 sys 0m2.437s 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:59.448 ************************************ 00:32:59.448 END TEST nvmf_queue_depth 00:32:59.448 ************************************ 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.448 ************************************ 00:32:59.448 START TEST nvmf_target_multipath 00:32:59.448 ************************************ 00:32:59.448 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:59.448 * Looking for test storage... 
00:32:59.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:59.448 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:59.448 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:59.448 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.710 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:59.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.711 --rc genhtml_branch_coverage=1 00:32:59.711 --rc genhtml_function_coverage=1 00:32:59.711 --rc genhtml_legend=1 00:32:59.711 --rc geninfo_all_blocks=1 00:32:59.711 --rc geninfo_unexecuted_blocks=1 00:32:59.711 00:32:59.711 ' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:59.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.711 --rc genhtml_branch_coverage=1 00:32:59.711 --rc genhtml_function_coverage=1 00:32:59.711 --rc genhtml_legend=1 00:32:59.711 --rc geninfo_all_blocks=1 00:32:59.711 --rc geninfo_unexecuted_blocks=1 00:32:59.711 00:32:59.711 ' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:59.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.711 --rc genhtml_branch_coverage=1 00:32:59.711 --rc genhtml_function_coverage=1 00:32:59.711 --rc genhtml_legend=1 00:32:59.711 --rc geninfo_all_blocks=1 00:32:59.711 --rc geninfo_unexecuted_blocks=1 00:32:59.711 00:32:59.711 ' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:59.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.711 --rc genhtml_branch_coverage=1 00:32:59.711 --rc genhtml_function_coverage=1 00:32:59.711 --rc 
genhtml_legend=1 00:32:59.711 --rc geninfo_all_blocks=1 00:32:59.711 --rc geninfo_unexecuted_blocks=1 00:32:59.711 00:32:59.711 ' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.711 14:37:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:59.711 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.712 14:37:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:59.712 Cannot find device "nvmf_init_br" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:59.712 Cannot find device "nvmf_init_br2" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:59.712 Cannot find device "nvmf_tgt_br" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:59.712 Cannot find device "nvmf_tgt_br2" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:32:59.712 Cannot find device "nvmf_init_br" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:59.712 Cannot find device "nvmf_init_br2" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:59.712 Cannot find device "nvmf_tgt_br" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:59.712 Cannot find device "nvmf_tgt_br2" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:59.712 Cannot find device "nvmf_br" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:59.712 Cannot find device "nvmf_init_if" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:59.712 Cannot find device "nvmf_init_if2" 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:59.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:59.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:59.712 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:59.973 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:59.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:59.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:32:59.974 00:32:59.974 --- 10.0.0.3 ping statistics --- 00:32:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.974 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:59.974 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:59.974 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:32:59.974 00:32:59.974 --- 10.0.0.4 ping statistics --- 00:32:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.974 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:59.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:59.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:32:59.974 00:32:59.974 --- 10.0.0.1 ping statistics --- 00:32:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.974 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:59.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
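The nvmf_veth_init sequence traced above builds a two-path topology: two veth pairs on the initiator side and two on the target side, with the *_if ends of the target pairs moved into the nvmf_tgt_ns_spdk namespace and all four *_br peer ends joined by a single bridge. A condensed sketch assembled from the commands in the log (a summary of the traced steps, not the exact nvmf/common.sh implementation):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if ends carry traffic, *_br ends plug into the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator side gets 10.0.0.1/.2, target namespace gets 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # one bridge ties the four *_br peers into a flat L2 segment
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # admit NVMe/TCP (port 4420) and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the trace verify exactly this wiring: the initiator namespace can reach both target addresses (10.0.0.3, 10.0.0.4) and the target namespace can reach both initiator addresses (10.0.0.1, 10.0.0.2).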
00:32:59.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:32:59.974 00:32:59.974 --- 10.0.0.2 ping statistics --- 00:32:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.974 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=113927 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 113927 00:32:59.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 113927 ']' 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:59.974 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:00.235 [2024-11-06 14:37:27.614109] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:00.235 [2024-11-06 14:37:27.615994] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:00.235 [2024-11-06 14:37:27.616074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.235 [2024-11-06 14:37:27.774153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.497 [2024-11-06 14:37:27.905330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.497 [2024-11-06 14:37:27.905399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.497 [2024-11-06 14:37:27.905411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.497 [2024-11-06 14:37:27.905423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.497 [2024-11-06 14:37:27.905432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.497 [2024-11-06 14:37:27.907456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.497 [2024-11-06 14:37:27.907702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.497 [2024-11-06 14:37:27.908069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.497 [2024-11-06 14:37:27.908180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.757 [2024-11-06 14:37:28.204622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:00.757 [2024-11-06 14:37:28.205711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:00.757 [2024-11-06 14:37:28.206739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:00.757 [2024-11-06 14:37:28.207122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:00.757 [2024-11-06 14:37:28.208076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
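Once the target is up, the RPC sequence traced just below provisions the ANA-enabled subsystem the multipath test exercises: a TCP transport, a RAM-backed bdev, one subsystem with ANA reporting on, and a listener on each of the two target addresses. Condensed from the rpc.py calls in the trace (flag glosses in the comments are the editor's reading, not log output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM disk, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -r             # -r: enable ANA reporting
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.4 -s 4420

The initiator side then runs nvme connect against both listeners with the generated hostnqn/hostid, which is what yields the two controller paths (nvme0c0n1, nvme0c1n1) under a single nvme-subsys0 multipath device later in the trace.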
00:33:01.017 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:01.017 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:33:01.017 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:01.017 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:01.017 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:01.017 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.017 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:01.278 [2024-11-06 14:37:28.709474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.278 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:01.539 Malloc0 00:33:01.539 14:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:33:01.803 14:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:02.062 14:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:02.063 [2024-11-06 14:37:29.673555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:02.324 14:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:33:02.324 [2024-11-06 14:37:29.893469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:33:02.324 14:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:33:02.585 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:33:02.585 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:33:02.585 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:33:02.585 14:37:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:02.585 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:33:02.585 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=114060 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:33:05.126 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:33:05.126 [global] 00:33:05.126 thread=1 00:33:05.126 invalidate=1 00:33:05.126 rw=randrw 00:33:05.126 time_based=1 00:33:05.126 runtime=6 00:33:05.126 ioengine=libaio 00:33:05.126 direct=1 00:33:05.126 bs=4096 00:33:05.126 iodepth=128 00:33:05.126 norandommap=0 00:33:05.126 numjobs=1 00:33:05.126 00:33:05.126 verify_dump=1 00:33:05.126 verify_backlog=512 00:33:05.126 verify_state_save=0 00:33:05.126 do_verify=1 00:33:05.126 verify=crc32c-intel 00:33:05.127 [job0] 00:33:05.127 filename=/dev/nvme0n1 00:33:05.127 Could not set queue depth (nvme0n1) 00:33:05.127 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:05.127 fio-3.35 00:33:05.127 Starting 1 thread 00:33:05.693 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:33:05.951 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:06.210 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:07.144 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:07.144 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:07.144 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:07.144 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:33:07.403 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:07.661 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:08.595 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:08.595 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:08.595 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:08.595 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 114060 00:33:11.124 00:33:11.124 job0: (groupid=0, jobs=1): err= 0: pid=114081: Wed Nov 6 14:37:38 2024 00:33:11.124 read: IOPS=13.9k, BW=54.2MiB/s (56.9MB/s)(326MiB/6005msec) 00:33:11.124 slat (usec): min=2, max=4859, avg=43.13, stdev=224.13 00:33:11.124 clat (usec): min=469, max=13532, avg=6203.33, stdev=1070.04 00:33:11.124 lat (usec): min=476, max=13538, avg=6246.46, stdev=1085.27 00:33:11.124 clat percentiles (usec): 00:33:11.124 | 1.00th=[ 3687], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5538], 00:33:11.124 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6259], 00:33:11.124 | 70.00th=[ 6456], 80.00th=[ 6783], 90.00th=[ 7504], 95.00th=[ 8291], 00:33:11.124 | 99.00th=[ 9372], 99.50th=[10028], 99.90th=[11994], 99.95th=[12256], 00:33:11.124 | 99.99th=[13042] 00:33:11.124 bw ( KiB/s): min=13192, max=37392, per=51.78%, avg=28751.33, stdev=7309.66, samples=12 00:33:11.124 iops : min= 3298, max= 9348, avg=7187.83, stdev=1827.42, samples=12 00:33:11.124 write: IOPS=8102, BW=31.6MiB/s (33.2MB/s)(169MiB/5328msec); 0 zone resets 00:33:11.124 slat (usec): min=8, max=3851, avg=50.29, stdev=132.93 00:33:11.124 clat (usec): min=435, max=13389, avg=5669.39, stdev=863.47 00:33:11.124 lat (usec): min=455, max=13411, avg=5719.68, stdev=867.23 00:33:11.124 clat percentiles (usec): 00:33:11.124 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 5211], 00:33:11.124 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5800], 00:33:11.124 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6390], 95.00th=[ 7242], 00:33:11.124 | 99.00th=[ 8356], 99.50th=[ 8979], 99.90th=[10552], 99.95th=[10814], 00:33:11.124 | 99.99th=[11994] 00:33:11.124 bw ( KiB/s): min=13808, max=36984, per=88.68%, avg=28739.33, stdev=6914.32, samples=12 00:33:11.124 iops : min= 3452, max= 9246, avg=7184.83, stdev=1728.58, samples=12 00:33:11.124 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:33:11.124 lat (msec) : 2=0.06%, 4=2.23%, 10=97.29%, 20=0.40% 00:33:11.124 cpu : usr=3.21%, sys=16.52%, ctx=9350, majf=0, minf=114 00:33:11.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:33:11.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:11.124 issued rwts: total=83360,43169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:11.124 00:33:11.124 Run status group 0 (all jobs): 00:33:11.124 READ: bw=54.2MiB/s (56.9MB/s), 54.2MiB/s-54.2MiB/s (56.9MB/s-56.9MB/s), io=326MiB (341MB), run=6005-6005msec 00:33:11.124 WRITE: bw=31.6MiB/s (33.2MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=169MiB (177MB), run=5328-5328msec 00:33:11.124 00:33:11.124 Disk stats (read/write): 00:33:11.124 nvme0n1: ios=82481/42005, merge=0/0, ticks=487407/229383, in_queue=716790, util=98.50% 00:33:11.124 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:33:11.124 14:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:33:11.383 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:12.317 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:12.317 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:12.317 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:12.317 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:33:12.317 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=114214 00:33:12.317 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:33:12.317 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:33:12.317 [global] 00:33:12.317 thread=1 00:33:12.317 invalidate=1 00:33:12.317 rw=randrw 00:33:12.317 time_based=1 00:33:12.317 runtime=6 00:33:12.317 ioengine=libaio 00:33:12.317 direct=1 00:33:12.317 bs=4096 00:33:12.317 iodepth=128 00:33:12.317 norandommap=0 00:33:12.317 numjobs=1 00:33:12.317 00:33:12.317 verify_dump=1 00:33:12.317 verify_backlog=512 00:33:12.317 verify_state_save=0 00:33:12.317 do_verify=1 00:33:12.317 verify=crc32c-intel 00:33:12.317 [job0] 00:33:12.317 filename=/dev/nvme0n1 00:33:12.317 Could not set queue depth (nvme0n1) 00:33:12.586 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:12.586 fio-3.35 00:33:12.586 Starting 1 thread 00:33:13.539 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:33:13.539 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:13.797 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:14.732 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:14.732 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:14.732 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:14.732 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:33:14.990 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:15.249 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:16.184 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:16.184 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:16.184 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:16.184 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 114214 00:33:18.713 00:33:18.713 job0: (groupid=0, jobs=1): err= 0: pid=114236: Wed Nov 6 14:37:46 2024 00:33:18.713 read: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(356MiB/6005msec) 00:33:18.713 slat (usec): min=2, max=4683, avg=34.70, stdev=204.27 00:33:18.713 clat (usec): min=200, max=11865, avg=5707.26, stdev=1533.92 00:33:18.713 lat (usec): min=209, max=11871, avg=5741.96, stdev=1549.14 00:33:18.713 clat percentiles (usec): 00:33:18.713 | 1.00th=[ 1352], 5.00th=[ 2147], 10.00th=[ 3490], 20.00th=[ 4948], 00:33:18.713 | 30.00th=[ 5473], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6128], 00:33:18.713 | 70.00th=[ 6325], 80.00th=[ 6587], 90.00th=[ 7177], 95.00th=[ 7898], 00:33:18.713 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[10421], 99.95th=[10814], 00:33:18.713 | 99.99th=[11600] 00:33:18.713 bw ( KiB/s): min= 8112, max=52568, per=52.76%, avg=32048.67, stdev=12978.23, samples=12 00:33:18.713 iops : min= 2028, max=13142, avg=8012.17, stdev=3244.56, samples=12 00:33:18.713 write: IOPS=9491, BW=37.1MiB/s (38.9MB/s)(188MiB/5072msec); 0 zone resets 00:33:18.713 slat (usec): min=7, max=2498, avg=39.80, stdev=106.82 00:33:18.713 clat (usec): min=343, max=10582, avg=5022.67, stdev=1571.13 00:33:18.713 lat (usec): min=357, max=10597, avg=5062.46, stdev=1580.01 00:33:18.713 clat percentiles (usec): 00:33:18.713 | 1.00th=[ 955], 5.00th=[ 1385], 10.00th=[ 2212], 20.00th=[ 3916], 00:33:18.713 | 30.00th=[ 5080], 40.00th=[ 5342], 50.00th=[ 5538], 60.00th=[ 5669], 00:33:18.713 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6325], 95.00th=[ 6652], 00:33:18.713 | 99.00th=[ 7832], 99.50th=[ 8455], 99.90th=[ 9634], 99.95th=[10028], 00:33:18.713 | 99.99th=[10421] 00:33:18.713 bw ( KiB/s): min= 8120, 
max=52312, per=84.42%, avg=32052.67, stdev=12933.21, samples=12 00:33:18.713 iops : min= 2030, max=13078, avg=8013.17, stdev=3233.30, samples=12 00:33:18.713 lat (usec) : 250=0.01%, 500=0.08%, 750=0.15%, 1000=0.46% 00:33:18.713 lat (msec) : 2=5.29%, 4=9.34%, 10=84.51%, 20=0.17% 00:33:18.713 cpu : usr=3.45%, sys=17.54%, ctx=11840, majf=0, minf=183 00:33:18.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:33:18.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:18.713 issued rwts: total=91195,48140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:18.713 00:33:18.713 Run status group 0 (all jobs): 00:33:18.713 READ: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=356MiB (374MB), run=6005-6005msec 00:33:18.713 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=188MiB (197MB), run=5072-5072msec 00:33:18.713 00:33:18.713 Disk stats (read/write): 00:33:18.713 nvme0n1: ios=90083/47269, merge=0/0, ticks=489065/226736, in_queue=715801, util=98.55% 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:18.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:33:18.713 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:18.973 14:37:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.973 rmmod nvme_tcp 00:33:18.973 rmmod nvme_fabrics 00:33:18.973 rmmod nvme_keyring 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 113927 ']' 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 113927 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 113927 ']' 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 113927 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 113927 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:18.973 killing process with pid 113927 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 113927' 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 113927 00:33:18.973 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 113927 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:33:19.917 00:33:19.917 real 0m20.532s 00:33:19.917 user 1m9.907s 00:33:19.917 sys 0m6.440s 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:19.917 ************************************ 00:33:19.917 END TEST nvmf_target_multipath 00:33:19.917 ************************************ 00:33:19.917 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:20.179 ************************************ 00:33:20.179 START TEST nvmf_zcopy 00:33:20.179 ************************************ 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:20.179 * Looking for test storage... 00:33:20.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.179 --rc genhtml_branch_coverage=1 00:33:20.179 --rc genhtml_function_coverage=1 00:33:20.179 --rc genhtml_legend=1 00:33:20.179 --rc geninfo_all_blocks=1 00:33:20.179 --rc geninfo_unexecuted_blocks=1 00:33:20.179 00:33:20.179 ' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.179 --rc genhtml_branch_coverage=1 00:33:20.179 --rc genhtml_function_coverage=1 00:33:20.179 --rc genhtml_legend=1 00:33:20.179 --rc geninfo_all_blocks=1 00:33:20.179 --rc geninfo_unexecuted_blocks=1 00:33:20.179 00:33:20.179 ' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.179 --rc genhtml_branch_coverage=1 00:33:20.179 --rc genhtml_function_coverage=1 00:33:20.179 --rc genhtml_legend=1 00:33:20.179 --rc geninfo_all_blocks=1 00:33:20.179 --rc geninfo_unexecuted_blocks=1 00:33:20.179 00:33:20.179 ' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.179 --rc genhtml_branch_coverage=1 00:33:20.179 --rc genhtml_function_coverage=1 00:33:20.179 --rc genhtml_legend=1 00:33:20.179 --rc geninfo_all_blocks=1 00:33:20.179 --rc geninfo_unexecuted_blocks=1 00:33:20.179 00:33:20.179 ' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.179 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.180 14:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:20.180 14:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:20.180 Cannot find device "nvmf_init_br" 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:20.180 Cannot find device "nvmf_init_br2" 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:20.180 Cannot find device "nvmf_tgt_br" 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:20.180 Cannot find device "nvmf_tgt_br2" 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:20.180 Cannot find device "nvmf_init_br" 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:33:20.180 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:20.440 Cannot find device "nvmf_init_br2" 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:20.440 Cannot find device "nvmf_tgt_br" 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:20.440 Cannot find device "nvmf_tgt_br2" 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:20.440 Cannot find device 
"nvmf_br" 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:20.440 Cannot find device "nvmf_init_if" 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:20.440 Cannot find device "nvmf_init_if2" 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:20.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:20.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:20.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:20.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:20.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:33:20.440 00:33:20.440 --- 10.0.0.3 ping statistics --- 00:33:20.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.440 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:33:20.440 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:20.440 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:20.440 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.025 ms 00:33:20.440 00:33:20.440 --- 10.0.0.4 ping statistics --- 00:33:20.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.440 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:20.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:33:20.441 00:33:20.441 --- 10.0.0.1 ping statistics --- 00:33:20.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.441 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:20.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:33:20.441 00:33:20.441 --- 10.0.0.2 ping statistics --- 00:33:20.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.441 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=114569 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 114569 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 114569 ']' 00:33:20.701 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:20.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.702 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:20.702 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.702 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:20.702 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.702 [2024-11-06 14:37:48.168244] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:20.702 [2024-11-06 14:37:48.170224] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:20.702 [2024-11-06 14:37:48.170727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.702 [2024-11-06 14:37:48.331693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.963 [2024-11-06 14:37:48.452773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.963 [2024-11-06 14:37:48.452837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.963 [2024-11-06 14:37:48.452855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.963 [2024-11-06 14:37:48.452866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.963 [2024-11-06 14:37:48.452875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.963 [2024-11-06 14:37:48.453878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.224 [2024-11-06 14:37:48.693847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:21.224 [2024-11-06 14:37:48.694171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
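The target bring-up traced above (nvmf/common.sh@508-510) reduces to two steps: launch nvmf_tgt inside the test network namespace, then poll its RPC socket until it answers. A minimal sketch of that sequence, using the command and paths exactly as they appear in the trace; the retry loop is an assumed stand-in for the waitforlisten helper, not its exact implementation:

```bash
#!/usr/bin/env bash
# Start the NVMe-oF target inside the test netns: instance id 0, all tracepoint
# groups enabled (-e 0xFFFF), interrupt mode, core mask 0x2 (core 1 only).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Assumed equivalent of waitforlisten: retry a cheap RPC until the UNIX-domain
# socket /var/tmp/spdk.sock accepts requests; only then can the test issue the
# nvmf_create_transport / nvmf_create_subsystem calls that follow in the trace.
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods > /dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
```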
00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.485 [2024-11-06 14:37:49.075035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.485 [2024-11-06 14:37:49.099202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:21.485 14:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.485 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.746 malloc0 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:21.746 { 00:33:21.746 "params": { 00:33:21.746 "name": "Nvme$subsystem", 00:33:21.746 "trtype": "$TEST_TRANSPORT", 00:33:21.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.746 "adrfam": "ipv4", 00:33:21.746 "trsvcid": "$NVMF_PORT", 00:33:21.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.746 "hdgst": ${hdgst:-false}, 00:33:21.746 "ddgst": ${ddgst:-false} 00:33:21.746 }, 00:33:21.746 "method": "bdev_nvme_attach_controller" 00:33:21.746 } 00:33:21.746 EOF 00:33:21.746 )") 00:33:21.746 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:21.747 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:21.747 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:21.747 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:21.747 "params": { 00:33:21.747 "name": "Nvme1", 00:33:21.747 "trtype": "tcp", 00:33:21.747 "traddr": "10.0.0.3", 00:33:21.747 "adrfam": "ipv4", 00:33:21.747 "trsvcid": "4420", 00:33:21.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.747 "hdgst": false, 00:33:21.747 "ddgst": false 00:33:21.747 }, 00:33:21.747 "method": "bdev_nvme_attach_controller" 00:33:21.747 }' 00:33:21.747 [2024-11-06 14:37:49.219332] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
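Unrolled, the xtrace above amounts to a five-step target configuration. A condensed sketch follows; every flag is copied verbatim from the trace, and rpc_cmd is the harness wrapper over the JSON-RPC client (scripts/rpc.py shown here for concreteness):

  # TCP transport with zero-copy enabled and in-capsule data size 0 (-c 0)
  rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem cnode1: any host allowed (-a), fixed serial (-s), at most 10 namespaces (-m)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Data and discovery listeners on the in-VM test network address
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  # 32 MB malloc bdev with 4096-byte blocks, then exported as NSID 1
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1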
00:33:21.747 [2024-11-06 14:37:49.219423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114620 ] 00:33:21.747 [2024-11-06 14:37:49.375582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.007 [2024-11-06 14:37:49.498679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.274 Running I/O for 10 seconds... 00:33:24.603 5372.00 IOPS, 41.97 MiB/s [2024-11-06T14:37:53.177Z] 5628.50 IOPS, 43.97 MiB/s [2024-11-06T14:37:54.139Z] 5879.33 IOPS, 45.93 MiB/s [2024-11-06T14:37:55.101Z] 6353.50 IOPS, 49.64 MiB/s [2024-11-06T14:37:56.036Z] 6639.40 IOPS, 51.87 MiB/s [2024-11-06T14:37:56.971Z] 6829.83 IOPS, 53.36 MiB/s [2024-11-06T14:37:57.906Z] 6965.29 IOPS, 54.42 MiB/s [2024-11-06T14:37:59.281Z] 7066.38 IOPS, 55.21 MiB/s [2024-11-06T14:38:00.214Z] 7145.44 IOPS, 55.82 MiB/s [2024-11-06T14:38:00.214Z] 7211.80 IOPS, 56.34 MiB/s
00:33:32.579 Latency(us)
00:33:32.579 [2024-11-06T14:38:00.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:32.579 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:32.579 Verification LBA range: start 0x0 length 0x1000
00:33:32.579 Nvme1n1 : 10.01 7215.83 56.37 0.00 0.00 17693.25 863.31 32667.18
00:33:32.579 [2024-11-06T14:38:00.214Z] ===================================================================================================================
00:33:32.579 [2024-11-06T14:38:00.214Z] Total : 7215.83 56.37 0.00 0.00 17693.25 863.31 32667.18
00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=114739 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.144 { 00:33:33.144 "params": { 00:33:33.144 "name": "Nvme$subsystem", 00:33:33.144 "trtype": "$TEST_TRANSPORT", 00:33:33.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.144 "adrfam": "ipv4", 00:33:33.144 "trsvcid": "$NVMF_PORT", 00:33:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.144 "hdgst": ${hdgst:-false}, 00:33:33.144 "ddgst": ${ddgst:-false} 00:33:33.144 }, 00:33:33.144 "method": "bdev_nvme_attach_controller" 00:33:33.144 } 00:33:33.144 EOF 00:33:33.144 )") 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
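The second bdevperf run (5 seconds of randrw at a 50% read mix, queue depth 128, 8 KiB I/O) is wired up the same way as the verify run that just finished: gen_nvmf_target_json expands the heredoc template above into a concrete bdev_nvme_attach_controller stanza, and process substitution hands the result to bdevperf as a /dev/fd path, which is why the trace shows --json /dev/fd/63. A minimal sketch of the mechanism, with a hypothetical gen_json standing in for the harness helper and the wrapper layout assumed from SPDK's JSON config format:

  # gen_json prints a complete bdev subsystem config on stdout, as gen_nvmf_target_json does.
  gen_json() {
    cat <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" } ] } ] }
  EOF
  }
  # <(...) expands to a /dev/fd/N path, which is exactly what bdevperf received above.
  bdevperf --json <(gen_json) -t 5 -q 128 -w randrw -M 50 -o 8192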
00:33:33.144 [2024-11-06 14:38:00.490596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.144 [2024-11-06 14:38:00.490635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:33.144 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.144 "params": { 00:33:33.144 "name": "Nvme1", 00:33:33.144 "trtype": "tcp", 00:33:33.144 "traddr": "10.0.0.3", 00:33:33.144 "adrfam": "ipv4", 00:33:33.144 "trsvcid": "4420", 00:33:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:33.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:33.144 "hdgst": false, 00:33:33.144 "ddgst": false 00:33:33.144 }, 00:33:33.144 "method": "bdev_nvme_attach_controller" 00:33:33.144 }' 2024/11/06 14:38:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:33.144 (dozens of identical retries follow at 8-12 ms intervals, 14:38:00.498559 through 14:38:01.066563, each logging the same subsystem.c:2123/nvmf_rpc.c:1517 *ERROR* pair and the same Code=-32602 JSON-RPC failure; the repeats are elided here, and the unique start-up notices of the second bdevperf instance are kept below)
00:33:33.144 [2024-11-06 14:38:00.553329] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:33:33.144 [2024-11-06 14:38:00.553437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114739 ]
00:33:33.144 [2024-11-06 14:38:00.707298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:33.404 [2024-11-06 14:38:00.797608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:33.664 Running I/O for 5 seconds...
00:33:33.664 [2024-11-06 14:38:01.081205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.664 [2024-11-06 14:38:01.081233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.664 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
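What this repeating error pair exercises: while the randrw job drives I/O, the harness keeps calling nvmf_subsystem_add_ns for NSID 1, which is already occupied by malloc0, so each attempt goes through the subsystem pause path (the nvmf_rpc_ns_paused frames), is rejected on the target, and surfaces to the RPC client as JSON-RPC error -32602 (the 2024/11/06-style lines). A one-off reproduction sketch, assuming the same target state; the rpc.py client is expected to exit non-zero when the call errors out:

  # The add must fail while NSID 1 is in use; treat unexpected success as a test bug.
  if rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
    echo "unexpected success: NSID 1 should already be in use" >&2
    exit 1
  fi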
00:33:33.664 (roughly thirty further identical failures follow every 8-16 ms, 14:38:01.096690 through 14:38:01.504677, while the 5-second randrw job runs; each logs the same subsystem.c:2123/nvmf_rpc.c:1517 *ERROR* pair and the same Code=-32602 JSON-RPC error, with the shm timestamp advancing from 00:33:33.664 to 00:33:33.924; the repeats are elided here)
00:33:33.924 [2024-11-06 14:38:01.520541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:33:33.924 [2024-11-06 14:38:01.520569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.924 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:33.924 [2024-11-06 14:38:01.536564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.924 [2024-11-06 14:38:01.536592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.924 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:33.924 [2024-11-06 14:38:01.552218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.924 [2024-11-06 14:38:01.552246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.924 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.568240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.568267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.584315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.584342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.600284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.600313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.616453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.616480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.632764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.632792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.648495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.648522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.664361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.664389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.680342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.680368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.696351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.696378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.712200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.712227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.728128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:34.183 [2024-11-06 14:38:01.728154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.744294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.744321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.760243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.760277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.183 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.183 [2024-11-06 14:38:01.776466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.183 [2024-11-06 14:38:01.776494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.184 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.184 [2024-11-06 14:38:01.792630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.184 [2024-11-06 14:38:01.792658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.184 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.184 [2024-11-06 14:38:01.808319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.184 [2024-11-06 14:38:01.808348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.184 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.824511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.824539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.840199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.840226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.856795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.856823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.872105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.872133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.879482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.879509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.891016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.891042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.904578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.904605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.920728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.920755] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.936642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.936670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.952707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.952735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.968168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.968197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:01.984170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:01.984198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:02.000387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:02.000414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:02.016509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:02.016537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:02.032339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:02.032365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:02.048317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:02.048343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.443 [2024-11-06 14:38:02.063994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.443 [2024-11-06 14:38:02.064021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.443 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 15265.00 IOPS, 119.26 MiB/s [2024-11-06T14:38:02.337Z] [2024-11-06 14:38:02.079683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.079711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.091451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.091478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.102867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.102992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.116680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.116709] 
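The triplet above repeats because the test keeps calling SPDK's nvmf_subsystem_add_ns JSON-RPC method with an NSID that is already attached; the target rejects every attempt with JSON-RPC error -32602 (Invalid parameters). The stray "%!s(bool=false)" in the params dump is the Go RPC client formatting a boolean with %s; the actual value is no_auto_visible=false. Below is a minimal Python sketch of the same call over SPDK's RPC socket -- the method name, params shape, and error code are taken straight from the log, the socket path is SPDK's default, and everything else is an illustrative assumption, not the test's own code:

    #!/usr/bin/env python3
    # Sketch: reproduce the duplicate-NSID rejection logged above.
    # Assumes an SPDK target on its default RPC socket, with subsystem
    # nqn.2016-06.io.spdk:cnode1 and bdev malloc0 already created.
    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"  # SPDK default; adjust if the target differs

    def rpc_call(method, params=None, req_id=1):
        """Send one JSON-RPC 2.0 request over the Unix socket, return the reply."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(SOCK_PATH)
            req = {"jsonrpc": "2.0", "id": req_id, "method": method}
            if params is not None:
                req["params"] = params
            sock.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    return None  # socket closed before a full reply arrived
                buf += chunk
                try:
                    return json.loads(buf)  # the reply is a single JSON object
                except json.JSONDecodeError:
                    continue  # partial read; keep going

    params = {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    }
    print(rpc_call("nvmf_subsystem_add_ns", params, 1))  # first add succeeds
    print(rpc_call("nvmf_subsystem_add_ns", params, 2))  # second add is rejected:
    # {'jsonrpc': '2.0', 'id': 2, 'error': {'code': -32602,
    #  'message': 'Invalid parameters'}}
    # while the target logs "Requested NSID 1 already in use", as seen above.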
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.133022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.133048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.148546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.148649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.164621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.164648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.180713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.180741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.196540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.196567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.212241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.212268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.228056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.228084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.238600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.238627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.702 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.702 [2024-11-06 14:38:02.245893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.702 [2024-11-06 14:38:02.245921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.703 [2024-11-06 14:38:02.254916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.703 [2024-11-06 14:38:02.254943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.703 [2024-11-06 14:38:02.267446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.703 [2024-11-06 14:38:02.267552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.703 [2024-11-06 14:38:02.279280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.703 [2024-11-06 14:38:02.279308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.703 [2024-11-06 14:38:02.291256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.703 [2024-11-06 14:38:02.291283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.703 [2024-11-06 14:38:02.303156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.703 [2024-11-06 14:38:02.303183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.703 [2024-11-06 14:38:02.315237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.703 [2024-11-06 14:38:02.315264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.703 [2024-11-06 14:38:02.327219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.703 [2024-11-06 14:38:02.327246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.703 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.338981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.339007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.352651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.352678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.368754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.368783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:33:34.962 [2024-11-06 14:38:02.384668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.384696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.400470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.400497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.416652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.416679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.433074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.433101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.447777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.447807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.457998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.458025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.473395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.473502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.488262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.488376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.504570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.504598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.520360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.520387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.536384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.536411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.552593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.552621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.568242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.568269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:34.962 [2024-11-06 14:38:02.584797] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.962 [2024-11-06 14:38:02.584825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.962 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.600314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.600341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.616398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.616425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.632452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.632479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.648455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.648482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.664603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.664629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.680075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.680102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.688107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.688134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.697900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.697928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.713038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.713063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.728664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.728690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.744342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.744370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.760299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.760326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.776523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.776550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.792722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.792749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.808723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.808752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.824800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.824829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.222 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.222 [2024-11-06 14:38:02.840688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.222 [2024-11-06 14:38:02.840716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.223 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.482 [2024-11-06 14:38:02.856321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.482 [2024-11-06 14:38:02.856348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.482 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:35.482 [2024-11-06 14:38:02.872602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.482 [2024-11-06 14:38:02.872629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.482 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:35.482 [2024-11-06 14:38:02.888816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:35.482 [2024-11-06 14:38:02.888844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:35.482 2024/11/06 14:38:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:35.483 [2024-11-06 14:38:03.060582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:35.483 [2024-11-06 14:38:03.060609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:35.483 2024/11/06 14:38:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:35.483 15312.00 IOPS, 119.62 MiB/s [2024-11-06T14:38:03.118Z] [2024-11-06 14:38:03.077094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:35.483 [2024-11-06 14:38:03.077122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:35.483 2024/11/06 14:38:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
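Each three-line burst here is one iteration of a loop that keeps re-adding NSID 1 to the same subsystem: subsystem.c rejects the NSID that is already claimed, nvmf_rpc.c reports the failed namespace add, and the Go JSON-RPC client surfaces Code=-32602, JSON-RPC's standard invalid-params error. A minimal Go sketch of one such call, assuming SPDK's usual /var/tmp/spdk.sock RPC socket (the method name and parameter layout are copied from the log's own params dump):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

func main() {
	// /var/tmp/spdk.sock is SPDK's default RPC socket; an assumption here.
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// JSON-RPC 2.0 request mirroring the logged params:
	// nqn plus namespace{bdev_name, nsid}.
	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "nvmf_subsystem_add_ns",
		"params": map[string]interface{}{
			"nqn": "nqn.2016-06.io.spdk:cnode1",
			"namespace": map[string]interface{}{
				"bdev_name": "malloc0",
				"nsid":      1, // NSID 1 is already in use, so the target rejects the call
			},
		},
	}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		panic(err)
	}

	// Expect an "error" member with code -32602 rather than a "result",
	// matching the Code=-32602 Msg=Invalid parameters lines in the log.
	var resp map[string]interface{}
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```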
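The token no_auto_visible:%!s(bool=false) inside every params dump is not part of the RPC payload; it is Go's fmt package flagging a %s verb applied to a bool value when the client stringifies its parameter map. A standalone illustration of that diagnostic (the field name is simply borrowed from the log):

```go
package main

import "fmt"

func main() {
	noAutoVisible := false

	// %s is a string verb; applying it to a bool makes fmt emit an inline
	// diagnostic instead of failing, which is exactly the "%!s(bool=false)"
	// token seen in the log's params dumps.
	fmt.Printf("no_auto_visible:%s\n", noAutoVisible) // no_auto_visible:%!s(bool=false)

	// %t (or %v) is the matching verb for booleans.
	fmt.Printf("no_auto_visible:%t\n", noAutoVisible) // no_auto_visible:false
}
```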
00:33:36.522 [2024-11-06 14:38:04.064574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:36.522 [2024-11-06 14:38:04.064601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:36.522 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:36.522 15345.33 IOPS, 119.89 MiB/s [2024-11-06T14:38:04.157Z] [2024-11-06 14:38:04.080371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:36.522 [2024-11-06 14:38:04.080399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:36.522 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
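The interleaved throughput samples (15312.00 and 15345.33 IOPS) come from the I/O workload still running while the RPC loop fails, and both are consistent with an 8 KiB I/O size: 15345.33 x 8 KiB / 1024 = 119.89 MiB/s. A quick check of that arithmetic; the 8 KiB size is inferred from the samples, not stated anywhere in the log:

```go
package main

import "fmt"

func main() {
	// IOPS readings taken from the log; the 8 KiB block size is inferred.
	samples := []float64{15312.00, 15345.33}
	const blockKiB = 8.0

	for _, iops := range samples {
		mib := iops * blockKiB / 1024 // (IOPS * KiB per I/O) / 1024 = MiB/s
		fmt.Printf("%8.2f IOPS -> %6.2f MiB/s\n", iops, mib)
	}
	// Yields ~119.62 and ~119.89 MiB/s, matching the readings above.
}
```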
00:33:37.301 [2024-11-06 14:38:04.800780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:37.301 [2024-11-06 14:38:04.800807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:37.301 [2024-11-06 14:38:04.816683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:37.301 [2024-11-06 14:38:04.816710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.301 [2024-11-06 14:38:04.832698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.301 [2024-11-06 14:38:04.832724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.301 [2024-11-06 14:38:04.848377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.301 [2024-11-06 14:38:04.848403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.301 [2024-11-06 14:38:04.864191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.301 [2024-11-06 14:38:04.864219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.301 [2024-11-06 14:38:04.880215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.301 [2024-11-06 14:38:04.880242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.301 [2024-11-06 14:38:04.896489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.301 [2024-11-06 14:38:04.896516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.301 [2024-11-06 14:38:04.912410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.301 [2024-11-06 14:38:04.912437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:33:37.301 [2024-11-06 14:38:04.928506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.301 [2024-11-06 14:38:04.928534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.301 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.560 [2024-11-06 14:38:04.944858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.560 [2024-11-06 14:38:04.944886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.560 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:04.960457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:04.960484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:04.976210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:04.976236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:04.991995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:04.992021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.008296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.008324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.024495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.024522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:05 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.040140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.040167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.056359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.056387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.072516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.072544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 15352.75 IOPS, 119.94 MiB/s [2024-11-06T14:38:05.196Z] 2024/11/06 14:38:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.088709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.088737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.104738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.104767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:37.561 [2024-11-06 14:38:05.121172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.561 [2024-11-06 14:38:05.121198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.561 2024/11/06 14:38:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
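[editor's note: the record above is SPDK's Go JSON-RPC client echoing a rejected nvmf_subsystem_add_ns call; NSID 1 is already attached to cnode1, so every retry fails with JSON-RPC -32602 (Invalid parameters). Against a live target the same rejection can be provoked with the stock scripts/rpc.py helper. Subsystem NQN, bdev name, and NSID below are copied from the log; the exact CLI flag spelling is an assumption and may differ across SPDK versions.]

  # sketch only: re-adding an NSID that is already in use
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first call attaches NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: "Requested NSID 1 already in use"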
[... the same record continues at the same cadence from 14:38:05.136 through 14:38:06.072 (console timestamps 00:33:37.561 - 00:33:38.617); repetitions elided ...]
00:33:38.617 15358.20 IOPS, 119.99 MiB/s [2024-11-06T14:38:06.252Z]
00:33:38.617 Latency(us)
00:33:38.617 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s    TO/s    Average       min       max
00:33:38.617 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:38.617 Nvme1n1                     :       5.01   15361.95    120.02     0.00    0.00    8324.90   2079.51  13812.97
00:33:38.617 ===================================================================================================================
00:33:38.617 Total                       :              15361.95    120.02     0.00    0.00    8324.90   2079.51  13812.97
00:33:38.617 [2024-11-06 14:38:06.082674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:38.617 [2024-11-06 14:38:06.082698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:38.617 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... after the run summary the same record keeps repeating at an ~8 ms cadence from 14:38:06.090 through 14:38:06.282 (console timestamps 00:33:38.617 - 00:33:38.891); repetitions elided ...]
00:33:38.891 [2024-11-06 14:38:06.290550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:38.891 [2024-11-06 14:38:06.290571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.298541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.298569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.306559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.306581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.314554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.314575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.322545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.322566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.330552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.330574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.338564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.338585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:33:38.891 [2024-11-06 14:38:06.346541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.346561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.354551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.354573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.362542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.362563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.370557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.370579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.378552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.378573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.386539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.386560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.394551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.394573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.402554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.402575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.410542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.410563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.418549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.418570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.426541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.426561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.891 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.891 [2024-11-06 14:38:06.434558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.891 [2024-11-06 14:38:06.434579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.442551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.442573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.450541] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.450562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.458550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.458571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.466553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.466576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.474541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.474562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.482574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.482595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.490557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.490579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.498553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.498574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.506551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.506572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.514541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.514562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.892 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:38.892 [2024-11-06 14:38:06.522553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.892 [2024-11-06 14:38:06.522577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.151 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:39.151 [2024-11-06 14:38:06.530559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.151 [2024-11-06 14:38:06.530581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.151 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:39.151 [2024-11-06 14:38:06.538543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.151 [2024-11-06 14:38:06.538563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.151 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:39.151 [2024-11-06 14:38:06.546553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.151 [2024-11-06 14:38:06.546573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.151 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:39.151 [2024-11-06 14:38:06.554546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:33:39.151 [2024-11-06 14:38:06.554566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.151 2024/11/06 14:38:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:39.151 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (114739) - No such process 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 114739 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:39.151 delay0 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.151 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:33:39.151 [2024-11-06 14:38:06.772846] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:47.265 Initializing NVMe Controllers 00:33:47.265 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:33:47.265 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:47.265 Initialization complete. Launching workers. 
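The zcopy.sh@52-@56 steps traced above swap NSID 1's backing bdev from malloc0 to a delay bdev and then fire the abort example at it, so that aborts reliably catch commands still in flight. A minimal stand-alone sketch of that sequence, assuming the test's rpc_cmd wrapper resolves to the stock scripts/rpc.py client and the target listens at 10.0.0.3:4420 as above; the delay bdev's latencies are given in microseconds, so 1000000 is roughly 1 s per I/O:

rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # free NSID 1 (was malloc0)
rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # avg/p99 read and write latency
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# every queued command now takes ~1 s, leaving a wide window for aborts to land:
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The per-worker tallies that follow show the expected outcome of racing aborts against such slow I/O: nearly every submitted abort succeeds.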
00:33:47.265 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 29272 00:33:47.265 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 29386, failed to submit 124 00:33:47.265 success 29333, unsuccessful 53, failed 0 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.265 rmmod nvme_tcp 00:33:47.265 rmmod nvme_fabrics 00:33:47.265 rmmod nvme_keyring 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 114569 ']' 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 114569 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 114569 ']' 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 114569 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 114569 00:33:47.265 killing process with pid 114569 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 114569' 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 114569 00:33:47.265 14:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 114569 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.265 14:38:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:47.265 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:33:47.266 00:33:47.266 real 0m27.267s 00:33:47.266 user 0m42.163s 00:33:47.266 sys 0m7.729s 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:47.266 ************************************ 00:33:47.266 END TEST nvmf_zcopy 
00:33:47.266 ************************************ 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:47.266 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:47.528 ************************************ 00:33:47.528 START TEST nvmf_nmic 00:33:47.528 ************************************ 00:33:47.528 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:47.528 * Looking for test storage... 00:33:47.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:47.528 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:47.528 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:47.528 14:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:47.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.528 --rc genhtml_branch_coverage=1 00:33:47.528 --rc genhtml_function_coverage=1 00:33:47.528 --rc genhtml_legend=1 00:33:47.528 --rc geninfo_all_blocks=1 00:33:47.528 --rc geninfo_unexecuted_blocks=1 00:33:47.528 00:33:47.528 ' 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:47.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.528 --rc genhtml_branch_coverage=1 00:33:47.528 --rc genhtml_function_coverage=1 00:33:47.528 --rc genhtml_legend=1 00:33:47.528 --rc geninfo_all_blocks=1 00:33:47.528 --rc geninfo_unexecuted_blocks=1 00:33:47.528 00:33:47.528 ' 00:33:47.528 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:47.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.529 --rc genhtml_branch_coverage=1 00:33:47.529 --rc genhtml_function_coverage=1 00:33:47.529 --rc genhtml_legend=1 00:33:47.529 --rc geninfo_all_blocks=1 00:33:47.529 --rc geninfo_unexecuted_blocks=1 00:33:47.529 00:33:47.529 ' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:47.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.529 --rc genhtml_branch_coverage=1 00:33:47.529 --rc genhtml_function_coverage=1 00:33:47.529 --rc genhtml_legend=1 00:33:47.529 --rc geninfo_all_blocks=1 00:33:47.529 --rc geninfo_unexecuted_blocks=1 00:33:47.529 00:33:47.529 ' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.529 14:38:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.529 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:47.530 Cannot find device "nvmf_init_br" 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:47.530 Cannot find device "nvmf_init_br2" 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:47.530 Cannot find device "nvmf_tgt_br" 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:47.530 Cannot find device "nvmf_tgt_br2" 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:47.530 Cannot find device "nvmf_init_br" 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:47.530 Cannot find device "nvmf_init_br2" 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:47.530 Cannot find device "nvmf_tgt_br" 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:33:47.530 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:47.792 Cannot find device "nvmf_tgt_br2" 00:33:47.792 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
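The "Cannot find device" probes above, and the deletes that follow, are expected noise: nvmf_veth_init begins by tearing down whatever topology a previous run may have left behind, and each probe is immediately followed by a true so a missing device counts as success. A sketch of that guard idiom (the loop shape is illustrative, not the verbatim helper):

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true   # detach from a stale bridge, if present
    ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true   # the netns itself may also be gone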
00:33:47.792 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:47.792 Cannot find device "nvmf_br" 00:33:47.792 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:33:47.792 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:47.792 Cannot find device "nvmf_init_if" 00:33:47.792 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:47.793 Cannot find device "nvmf_init_if2" 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:47.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:47.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:47.793 14:38:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:47.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:47.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:33:47.793 00:33:47.793 --- 10.0.0.3 ping statistics --- 00:33:47.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.793 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:47.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:47.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:33:47.793 00:33:47.793 --- 10.0.0.4 ping statistics --- 00:33:47.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.793 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:33:47.793 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:47.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:33:48.055 00:33:48.055 --- 10.0.0.1 ping statistics --- 00:33:48.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.055 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:48.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:33:48.055 00:33:48.055 --- 10.0.0.2 ping statistics --- 00:33:48.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.055 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=115126 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 115126 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 115126 ']' 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:48.055 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:48.055 [2024-11-06 14:38:15.563524] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:48.056 [2024-11-06 14:38:15.566278] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:48.056 [2024-11-06 14:38:15.567101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.316 [2024-11-06 14:38:15.744983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:48.316 [2024-11-06 14:38:15.887930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.316 [2024-11-06 14:38:15.888011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.316 [2024-11-06 14:38:15.888023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.316 [2024-11-06 14:38:15.888034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.316 [2024-11-06 14:38:15.888042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:48.316 [2024-11-06 14:38:15.890236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.316 [2024-11-06 14:38:15.891055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:48.316 [2024-11-06 14:38:15.891314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:48.316 [2024-11-06 14:38:15.891315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.577 [2024-11-06 14:38:16.190225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:48.577 [2024-11-06 14:38:16.191450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:48.577 [2024-11-06 14:38:16.192129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
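nvmfappstart launches the target inside the namespace and then blocks in waitforlisten until the app answers on its JSON-RPC socket, which is why the trace prints the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above. A minimal stand-in for that wait, assuming the app was just backgrounded (illustrative only, not the repo's implementation):

# Poll until the freshly started nvmf_tgt serves RPC on /var/tmp/spdk.sock.
pid=$!                                   # pid of the backgrounded nvmf_tgt
for _ in {1..100}; do
    # Bail out if the target died during startup.
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # rpc_get_methods succeeds once the RPC listener is up.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done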
00:33:48.577 [2024-11-06 14:38:16.192484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:48.577 [2024-11-06 14:38:16.192500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.838 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.100 [2024-11-06 14:38:16.472644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.100 Malloc0 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
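The nmic.sh provisioning just traced comes down to five RPCs; roughly equivalent direct scripts/rpc.py invocations are sketched below (rpc_cmd in the trace is the autotest wrapper around the same JSON-RPC client):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport with the traced options
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420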
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:49.100 [2024-11-06 14:38:16.592870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:33:49.100 test case1: single bdev can't be used in multiple subsystems
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.100 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:49.100 [2024-11-06 14:38:16.616417] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:33:49.100 [2024-11-06 14:38:16.616476] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:33:49.100 [2024-11-06 14:38:16.616495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:49.100 2024/11/06 14:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:49.100 request:
00:33:49.100 {
00:33:49.100 "method": "nvmf_subsystem_add_ns",
00:33:49.100 "params": {
00:33:49.100 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:33:49.100 "namespace": {
00:33:49.100 "bdev_name": "Malloc0",
00:33:49.100 "no_auto_visible": false
00:33:49.100 }
00:33:49.100 }
00:33:49.100 }
00:33:49.101 Got JSON-RPC error response
00:33:49.101 GoRPCClient: error on JSON-RPC call
00:33:49.101 14:38:16
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:49.101 Adding namespace failed - expected result. 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:49.101 test case2: host connect to nvmf target in multiple paths 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.101 [2024-11-06 14:38:16.628601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:33:49.101 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:33:49.362 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:49.362 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:33:49.362 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:49.362 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:33:49.362 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:33:51.266 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:51.266 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:33:51.266 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:51.266 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:33:51.266 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:51.266 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:33:51.266 14:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:33:51.266 [global]
00:33:51.266 thread=1
00:33:51.266 invalidate=1
00:33:51.266 rw=write
00:33:51.266 time_based=1
00:33:51.266 runtime=1
00:33:51.266 ioengine=libaio
00:33:51.266 direct=1
00:33:51.266 bs=4096
00:33:51.266 iodepth=1
00:33:51.266 norandommap=0
00:33:51.266 numjobs=1
00:33:51.266 
00:33:51.266 verify_dump=1
00:33:51.266 verify_backlog=512
00:33:51.266 verify_state_save=0
00:33:51.266 do_verify=1
00:33:51.266 verify=crc32c-intel
00:33:51.266 [job0]
00:33:51.266 filename=/dev/nvme0n1
00:33:51.266 Could not set queue depth (nvme0n1)
00:33:51.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:51.524 fio-3.35
00:33:51.524 Starting 1 thread
00:33:52.898 
00:33:52.898 job0: (groupid=0, jobs=1): err= 0: pid=115232: Wed Nov 6 14:38:20 2024
00:33:52.898 read: IOPS=4353, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1001msec)
00:33:52.898 slat (nsec): min=6212, max=26188, avg=6757.53, stdev=1268.29
00:33:52.898 clat (usec): min=100, max=271, avg=118.91, stdev= 7.85
00:33:52.898 lat (usec): min=106, max=278, avg=125.67, stdev= 7.96
00:33:52.898 clat percentiles (usec):
00:33:52.898 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 112], 20.00th=[ 114],
00:33:52.898 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120],
00:33:52.898 | 70.00th=[ 122], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 133],
00:33:52.898 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 208], 99.95th=[ 235],
00:33:52.898 | 99.99th=[ 273]
00:33:52.898 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets
00:33:52.898 slat (usec): min=9, max=100, avg=10.89, stdev= 3.29
00:33:52.898 clat (usec): min=71, max=241, avg=85.69, stdev= 7.55
00:33:52.898 lat (usec): min=81, max=301, avg=96.57, stdev= 9.20
00:33:52.898 clat percentiles (usec):
00:33:52.898 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 82],
00:33:52.898 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 86],
00:33:52.898 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 95], 95.00th=[ 98],
00:33:52.898 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 133], 99.95th=[ 227],
00:33:52.898 | 99.99th=[ 243]
00:33:52.898 bw ( KiB/s): min=19512, max=19512, per=100.00%, avg=19512.00, stdev= 0.00, samples=1
00:33:52.898 iops : min= 4878, max= 4878, avg=4878.00, stdev= 0.00, samples=1
00:33:52.898 lat (usec) : 100=49.61%, 250=50.38%, 500=0.01%
00:33:52.898 cpu : usr=0.90%, sys=6.50%, ctx=8967, majf=0, minf=5
00:33:52.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:52.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:52.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:52.898 issued rwts: total=4358,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:52.898 latency : target=0, window=0, percentile=100.00%, depth=1
00:33:52.898 
00:33:52.898 Run status group 0 (all jobs):
00:33:52.898 READ: bw=17.0MiB/s (17.8MB/s), 17.0MiB/s-17.0MiB/s (17.8MB/s-17.8MB/s), io=17.0MiB (17.8MB), run=1001-1001msec
00:33:52.898 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec
00:33:52.898 
00:33:52.898 Disk stats (read/write):
00:33:52.898 nvme0n1: ios=4014/4096, merge=0/0, ticks=504/370, in_queue=874, util=91.28%
00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- #
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:52.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:52.898 rmmod nvme_tcp 00:33:52.898 rmmod nvme_fabrics 00:33:52.898 rmmod nvme_keyring 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 115126 ']' 00:33:52.898 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 115126 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 115126 ']' 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 115126 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 115126 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:52.899 killing process with pid 115126 00:33:52.899 14:38:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 115126' 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 115126 00:33:52.899 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 115126 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:33:53.839 00:33:53.839 real 0m6.555s 00:33:53.839 user 0m16.724s 00:33:53.839 sys 0m1.814s 00:33:53.839 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:53.840 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:53.840 ************************************ 00:33:53.840 END TEST nvmf_nmic 00:33:53.840 ************************************ 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:54.102 ************************************ 00:33:54.102 START TEST nvmf_fio_target 00:33:54.102 ************************************ 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:54.102 * Looking for test storage... 
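The fio.sh prologue below probes the installed lcov before picking coverage flags; the lt/cmp_versions helpers it traces split a version string on '.', '-' and ':' and compare the pieces numerically, field by field. A simplified sketch of that idiom, assuming numeric fields (hypothetical version_lt, not the repo's exact implementation):

# Succeed when version $1 sorts strictly before version $2.
version_lt() {
    local IFS=.-:                 # split fields the way cmp_versions does
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2: use the legacy --rc flag spelling"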
00:33:54.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.102 --rc genhtml_branch_coverage=1 00:33:54.102 --rc genhtml_function_coverage=1 00:33:54.102 --rc genhtml_legend=1 00:33:54.102 --rc geninfo_all_blocks=1 00:33:54.102 --rc geninfo_unexecuted_blocks=1 00:33:54.102 00:33:54.102 ' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.102 --rc genhtml_branch_coverage=1 00:33:54.102 --rc genhtml_function_coverage=1 00:33:54.102 --rc genhtml_legend=1 00:33:54.102 --rc geninfo_all_blocks=1 00:33:54.102 --rc geninfo_unexecuted_blocks=1 00:33:54.102 00:33:54.102 ' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.102 --rc genhtml_branch_coverage=1 00:33:54.102 --rc genhtml_function_coverage=1 00:33:54.102 --rc genhtml_legend=1 00:33:54.102 --rc geninfo_all_blocks=1 00:33:54.102 --rc geninfo_unexecuted_blocks=1 00:33:54.102 00:33:54.102 ' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.102 --rc genhtml_branch_coverage=1 00:33:54.102 --rc genhtml_function_coverage=1 00:33:54.102 --rc genhtml_legend=1 00:33:54.102 --rc geninfo_all_blocks=1 00:33:54.102 --rc geninfo_unexecuted_blocks=1 00:33:54.102 
00:33:54.102 ' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.102 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:54.103 14:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:54.103 Cannot find device "nvmf_init_br" 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:54.103 Cannot find device "nvmf_init_br2" 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:33:54.103 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:54.365 Cannot find device "nvmf_tgt_br" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:54.365 Cannot find device "nvmf_tgt_br2" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:54.365 Cannot find device "nvmf_init_br" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:54.365 Cannot find device "nvmf_init_br2" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:54.365 Cannot find device "nvmf_tgt_br" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:54.365 Cannot find device "nvmf_tgt_br2" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:54.365 Cannot find device "nvmf_br" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:54.365 Cannot find device "nvmf_init_if" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:54.365 Cannot find device "nvmf_init_if2" 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:54.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:54.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:54.365 14:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:54.365 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:54.625 14:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:54.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:54.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:33:54.625 00:33:54.625 --- 10.0.0.3 ping statistics --- 00:33:54.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.625 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:54.625 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:54.625 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:33:54.625 00:33:54.625 --- 10.0.0.4 ping statistics --- 00:33:54.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.625 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:54.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:33:54.625 00:33:54.625 --- 10.0.0.1 ping statistics --- 00:33:54.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.625 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:54.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:54.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:33:54.625 00:33:54.625 --- 10.0.0.2 ping statistics --- 00:33:54.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.625 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=115472 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 115472 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 115472 ']' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:54.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:54.625 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.625 [2024-11-06 14:38:22.127543] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
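nvmfappstart has just launched nvmf_tgt inside the namespace with -e 0xFFFF --interrupt-mode -m 0xF, and waitforlisten now polls the default RPC socket until the app answers; the notices below about reactors and spdk_threads entering interrupt mode are the direct effect of that flag. The launch-and-wait pattern reduces to roughly this sketch ($SPDK_DIR and the 0.5 s poll interval are illustrative; rpc_get_methods is just a cheap RPC to probe with):

    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the target is ready to accept RPCs
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
        sleep 0.5
    done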
00:33:54.625 [2024-11-06 14:38:22.129646] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:54.625 [2024-11-06 14:38:22.130237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.886 [2024-11-06 14:38:22.289635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.886 [2024-11-06 14:38:22.409324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.886 [2024-11-06 14:38:22.409372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.886 [2024-11-06 14:38:22.409382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.886 [2024-11-06 14:38:22.409392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.886 [2024-11-06 14:38:22.409400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.886 [2024-11-06 14:38:22.411090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.886 [2024-11-06 14:38:22.411829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.886 [2024-11-06 14:38:22.412153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.886 [2024-11-06 14:38:22.412257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.146 [2024-11-06 14:38:22.663520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:55.146 [2024-11-06 14:38:22.663752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:55.146 [2024-11-06 14:38:22.664629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:55.146 [2024-11-06 14:38:22.664683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:55.146 [2024-11-06 14:38:22.664984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
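With all four reactors up, target/fio.sh provisions the target over JSON-RPC: a TCP transport, seven 64 MiB malloc bdevs (512-byte blocks), a RAID-0 array over two of them plus a concat array over three more, and a single subsystem exposing four namespaces on 10.0.0.3:4420. The sequence traced below condenses to roughly the following sketch (rpc shortened for readability; the -o/-u transport flags are passed exactly as logged):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done      # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # initiator side

Once nvme connect succeeds, waitforserial loops on lsblk until all four namespaces show up with the SPDKISFASTANDAWESOME serial, which is why the kernel devices appear as /dev/nvme0n1..n4 in the fio job files.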
00:33:55.719 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:55.719 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:33:55.719 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:55.719 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:55.719 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.719 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.719 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:55.719 [2024-11-06 14:38:23.305527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.980 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.241 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:56.241 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.511 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:56.511 14:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.789 14:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:56.789 14:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.050 14:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:57.050 14:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:57.311 14:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.572 14:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:57.572 14:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.830 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:57.830 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:58.090 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:58.090 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:58.351 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:58.351 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:58.351 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:58.611 14:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:58.611 14:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:58.871 14:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:59.131 [2024-11-06 14:38:26.621610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:59.131 14:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:59.391 14:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:59.650 14:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:33:59.650 14:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:59.650 14:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:33:59.650 14:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:59.650 14:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:33:59.650 14:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:33:59.650 14:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:01.552 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:01.552 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:01.552 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:01.811 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:01.811 14:38:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:01.811 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:01.811 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:01.811 [global] 00:34:01.811 thread=1 00:34:01.811 invalidate=1 00:34:01.811 rw=write 00:34:01.811 time_based=1 00:34:01.811 runtime=1 00:34:01.811 ioengine=libaio 00:34:01.811 direct=1 00:34:01.811 bs=4096 00:34:01.811 iodepth=1 00:34:01.811 norandommap=0 00:34:01.811 numjobs=1 00:34:01.811 00:34:01.811 verify_dump=1 00:34:01.811 verify_backlog=512 00:34:01.811 verify_state_save=0 00:34:01.811 do_verify=1 00:34:01.811 verify=crc32c-intel 00:34:01.811 [job0] 00:34:01.811 filename=/dev/nvme0n1 00:34:01.811 [job1] 00:34:01.811 filename=/dev/nvme0n2 00:34:01.811 [job2] 00:34:01.811 filename=/dev/nvme0n3 00:34:01.811 [job3] 00:34:01.811 filename=/dev/nvme0n4 00:34:01.811 Could not set queue depth (nvme0n1) 00:34:01.811 Could not set queue depth (nvme0n2) 00:34:01.811 Could not set queue depth (nvme0n3) 00:34:01.811 Could not set queue depth (nvme0n4) 00:34:01.811 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.811 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.811 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.811 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.811 fio-3.35 00:34:01.811 Starting 4 threads 00:34:03.187 00:34:03.187 job0: (groupid=0, jobs=1): err= 0: pid=115757: Wed Nov 6 14:38:30 2024 00:34:03.187 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:34:03.187 slat (nsec): min=5858, max=28536, avg=8381.02, stdev=1661.48 00:34:03.187 clat (usec): min=145, max=41419, avg=254.55, stdev=910.79 00:34:03.187 lat (usec): min=153, max=41428, avg=262.93, stdev=910.80 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 206], 20.00th=[ 217], 00:34:03.187 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:34:03.187 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 318], 00:34:03.187 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 742], 00:34:03.187 | 99.99th=[41681] 00:34:03.187 write: IOPS=2296, BW=9187KiB/s (9407kB/s)(9196KiB/1001msec); 0 zone resets 00:34:03.187 slat (nsec): min=8040, max=99754, avg=13501.26, stdev=4010.98 00:34:03.187 clat (usec): min=100, max=1751, avg=185.66, stdev=36.59 00:34:03.187 lat (usec): min=122, max=1764, avg=199.16, stdev=36.67 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:34:03.187 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:34:03.187 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:34:03.187 | 99.00th=[ 225], 99.50th=[ 255], 99.90th=[ 379], 99.95th=[ 383], 00:34:03.187 | 99.99th=[ 1745] 00:34:03.187 bw ( KiB/s): min= 8624, max= 8624, per=18.39%, avg=8624.00, stdev= 0.00, samples=1 00:34:03.187 iops : min= 2156, max= 2156, avg=2156.00, stdev= 0.00, samples=1 00:34:03.187 lat (usec) : 250=91.63%, 500=8.30%, 750=0.02% 00:34:03.187 lat (msec) : 
2=0.02%, 50=0.02% 00:34:03.187 cpu : usr=0.70%, sys=3.70%, ctx=4347, majf=0, minf=7 00:34:03.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.187 issued rwts: total=2048,2299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.187 job1: (groupid=0, jobs=1): err= 0: pid=115758: Wed Nov 6 14:38:30 2024 00:34:03.187 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:34:03.187 slat (nsec): min=7157, max=77085, avg=10962.71, stdev=5536.12 00:34:03.187 clat (usec): min=110, max=411, avg=141.47, stdev=15.33 00:34:03.187 lat (usec): min=119, max=419, avg=152.44, stdev=17.56 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:34:03.187 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:34:03.187 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:34:03.187 | 99.00th=[ 186], 99.50th=[ 202], 99.90th=[ 326], 99.95th=[ 404], 00:34:03.187 | 99.99th=[ 412] 00:34:03.187 write: IOPS=3694, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec); 0 zone resets 00:34:03.187 slat (nsec): min=10516, max=88275, avg=14704.78, stdev=4751.80 00:34:03.187 clat (usec): min=75, max=339, avg=105.98, stdev=13.48 00:34:03.187 lat (usec): min=95, max=352, avg=120.68, stdev=14.78 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:34:03.187 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 104], 60.00th=[ 106], 00:34:03.187 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 125], 00:34:03.187 | 99.00th=[ 149], 99.50th=[ 165], 99.90th=[ 245], 99.95th=[ 273], 00:34:03.187 | 99.99th=[ 338] 00:34:03.187 bw ( KiB/s): min=16384, max=16384, per=34.93%, avg=16384.00, stdev= 0.00, samples=1 00:34:03.187 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:34:03.187 lat (usec) : 100=15.06%, 250=84.81%, 500=0.12% 00:34:03.187 cpu : usr=0.90%, sys=6.80%, ctx=7290, majf=0, minf=17 00:34:03.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.187 issued rwts: total=3584,3698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.187 job2: (groupid=0, jobs=1): err= 0: pid=115759: Wed Nov 6 14:38:30 2024 00:34:03.187 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:34:03.187 slat (nsec): min=4678, max=20700, avg=6695.27, stdev=1371.93 00:34:03.187 clat (usec): min=152, max=41437, avg=256.27, stdev=911.00 00:34:03.187 lat (usec): min=159, max=41444, avg=262.96, stdev=911.02 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 208], 20.00th=[ 219], 00:34:03.187 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:34:03.187 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 310], 00:34:03.187 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 383], 99.95th=[ 594], 00:34:03.187 | 99.99th=[41681] 00:34:03.187 write: IOPS=2297, BW=9191KiB/s (9411kB/s)(9200KiB/1001msec); 0 zone resets 00:34:03.187 slat (nsec): min=7327, max=97048, avg=12741.68, stdev=4137.21 00:34:03.187 
clat (usec): min=105, max=1701, avg=186.44, stdev=35.22 00:34:03.187 lat (usec): min=123, max=1711, avg=199.18, stdev=35.29 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:34:03.187 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:34:03.187 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:34:03.187 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 375], 99.95th=[ 416], 00:34:03.187 | 99.99th=[ 1696] 00:34:03.187 bw ( KiB/s): min= 8624, max= 8624, per=18.39%, avg=8624.00, stdev= 0.00, samples=1 00:34:03.187 iops : min= 2156, max= 2156, avg=2156.00, stdev= 0.00, samples=1 00:34:03.187 lat (usec) : 250=90.87%, 500=9.06%, 750=0.02% 00:34:03.187 lat (msec) : 2=0.02%, 50=0.02% 00:34:03.187 cpu : usr=0.50%, sys=3.30%, ctx=4348, majf=0, minf=13 00:34:03.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.187 issued rwts: total=2048,2300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.187 job3: (groupid=0, jobs=1): err= 0: pid=115760: Wed Nov 6 14:38:30 2024 00:34:03.187 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:34:03.187 slat (nsec): min=6284, max=25251, avg=6991.78, stdev=1602.54 00:34:03.187 clat (usec): min=134, max=346, avg=163.79, stdev=14.82 00:34:03.187 lat (usec): min=140, max=353, avg=170.78, stdev=14.87 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:34:03.187 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:34:03.187 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 188], 00:34:03.187 | 99.00th=[ 206], 99.50th=[ 221], 99.90th=[ 310], 99.95th=[ 318], 00:34:03.187 | 99.99th=[ 347] 00:34:03.187 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(13.4MiB/1001msec); 0 zone resets 00:34:03.187 slat (nsec): min=9816, max=93644, avg=11916.78, stdev=5421.61 00:34:03.187 clat (usec): min=94, max=427, avg=124.71, stdev=17.17 00:34:03.187 lat (usec): min=105, max=437, avg=136.62, stdev=18.81 00:34:03.187 clat percentiles (usec): 00:34:03.187 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:34:03.187 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 125], 00:34:03.188 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:34:03.188 | 99.00th=[ 172], 99.50th=[ 229], 99.90th=[ 310], 99.95th=[ 392], 00:34:03.188 | 99.99th=[ 429] 00:34:03.188 bw ( KiB/s): min=13704, max=13704, per=29.22%, avg=13704.00, stdev= 0.00, samples=1 00:34:03.188 iops : min= 3426, max= 3426, avg=3426.00, stdev= 0.00, samples=1 00:34:03.188 lat (usec) : 100=0.14%, 250=99.57%, 500=0.29% 00:34:03.188 cpu : usr=1.70%, sys=4.20%, ctx=6512, majf=0, minf=11 00:34:03.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.188 issued rwts: total=3072,3440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.188 00:34:03.188 Run status group 0 (all jobs): 00:34:03.188 READ: bw=42.0MiB/s (44.0MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=42.0MiB (44.0MB), 
run=1001-1001msec 00:34:03.188 WRITE: bw=45.8MiB/s (48.0MB/s), 9187KiB/s-14.4MiB/s (9407kB/s-15.1MB/s), io=45.8MiB (48.1MB), run=1001-1001msec 00:34:03.188 00:34:03.188 Disk stats (read/write): 00:34:03.188 nvme0n1: ios=1810/2048, merge=0/0, ticks=473/386, in_queue=859, util=89.68% 00:34:03.188 nvme0n2: ios=3121/3331, merge=0/0, ticks=464/375, in_queue=839, util=89.84% 00:34:03.188 nvme0n3: ios=1787/2048, merge=0/0, ticks=485/376, in_queue=861, util=90.03% 00:34:03.188 nvme0n4: ios=2655/3072, merge=0/0, ticks=439/396, in_queue=835, util=89.98% 00:34:03.188 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:03.188 [global] 00:34:03.188 thread=1 00:34:03.188 invalidate=1 00:34:03.188 rw=randwrite 00:34:03.188 time_based=1 00:34:03.188 runtime=1 00:34:03.188 ioengine=libaio 00:34:03.188 direct=1 00:34:03.188 bs=4096 00:34:03.188 iodepth=1 00:34:03.188 norandommap=0 00:34:03.188 numjobs=1 00:34:03.188 00:34:03.188 verify_dump=1 00:34:03.188 verify_backlog=512 00:34:03.188 verify_state_save=0 00:34:03.188 do_verify=1 00:34:03.188 verify=crc32c-intel 00:34:03.188 [job0] 00:34:03.188 filename=/dev/nvme0n1 00:34:03.188 [job1] 00:34:03.188 filename=/dev/nvme0n2 00:34:03.188 [job2] 00:34:03.188 filename=/dev/nvme0n3 00:34:03.188 [job3] 00:34:03.188 filename=/dev/nvme0n4 00:34:03.188 Could not set queue depth (nvme0n1) 00:34:03.188 Could not set queue depth (nvme0n2) 00:34:03.188 Could not set queue depth (nvme0n3) 00:34:03.188 Could not set queue depth (nvme0n4) 00:34:03.188 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.188 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.188 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.188 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.188 fio-3.35 00:34:03.188 Starting 4 threads 00:34:04.563 00:34:04.563 job0: (groupid=0, jobs=1): err= 0: pid=115814: Wed Nov 6 14:38:31 2024 00:34:04.563 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:34:04.563 slat (nsec): min=4565, max=27068, avg=6850.19, stdev=2448.93 00:34:04.563 clat (usec): min=146, max=611, avg=243.05, stdev=37.43 00:34:04.563 lat (usec): min=153, max=616, avg=249.90, stdev=37.94 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 219], 20.00th=[ 229], 00:34:04.563 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:34:04.563 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 322], 00:34:04.563 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 465], 99.95th=[ 545], 00:34:04.563 | 99.99th=[ 611] 00:34:04.563 write: IOPS=2442, BW=9770KiB/s (10.0MB/s)(9780KiB/1001msec); 0 zone resets 00:34:04.563 slat (usec): min=7, max=125, avg=12.94, stdev= 4.91 00:34:04.563 clat (usec): min=105, max=845, avg=185.27, stdev=20.96 00:34:04.563 lat (usec): min=121, max=868, avg=198.22, stdev=21.34 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:34:04.563 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:34:04.563 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:34:04.563 | 99.00th=[ 223], 99.50th=[ 239], 99.90th=[ 363], 99.95th=[ 537], 
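Each of these passes is generated by scripts/fio-wrapper, whose flags map one-to-one onto the [global] section echoed before each run: -i is the block size, -d the iodepth, -t the rw mode, -r the runtime in seconds, and -v switches on crc32c-intel verification, while -p nvmf points it at the kernel nvme block devices (as the generated filename= lines show). Stripped of the wrapper, this randwrite pass is roughly equivalent to the sketch below (one caveat: the wrapper emits a separate [jobN] per namespace, whereas a colon-joined filename list runs them under a single job):

    fio --name=nvmf --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --bs=4096 --iodepth=1 --rw=randwrite --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
        --filename=/dev/nvme0n1:/dev/nvme0n2:/dev/nvme0n3:/dev/nvme0n4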
00:34:04.563 | 99.99th=[ 848] 00:34:04.563 bw ( KiB/s): min= 9720, max= 9720, per=20.01%, avg=9720.00, stdev= 0.00, samples=1 00:34:04.563 iops : min= 2430, max= 2430, avg=2430.00, stdev= 0.00, samples=1 00:34:04.563 lat (usec) : 250=86.60%, 500=13.31%, 750=0.07%, 1000=0.02% 00:34:04.563 cpu : usr=0.70%, sys=3.30%, ctx=4494, majf=0, minf=15 00:34:04.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.563 issued rwts: total=2048,2445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.563 job1: (groupid=0, jobs=1): err= 0: pid=115815: Wed Nov 6 14:38:31 2024 00:34:04.563 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:34:04.563 slat (nsec): min=7904, max=68980, avg=9916.48, stdev=3766.09 00:34:04.563 clat (usec): min=115, max=373, avg=140.40, stdev=11.99 00:34:04.563 lat (usec): min=124, max=381, avg=150.32, stdev=13.38 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:34:04.563 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:34:04.563 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:34:04.563 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 231], 99.95th=[ 359], 00:34:04.563 | 99.99th=[ 375] 00:34:04.563 write: IOPS=3800, BW=14.8MiB/s (15.6MB/s)(14.9MiB/1001msec); 0 zone resets 00:34:04.563 slat (usec): min=11, max=129, avg=14.15, stdev= 3.88 00:34:04.563 clat (usec): min=81, max=269, avg=105.36, stdev=12.09 00:34:04.563 lat (usec): min=94, max=288, avg=119.51, stdev=13.11 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 97], 00:34:04.563 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 104], 60.00th=[ 106], 00:34:04.563 | 70.00th=[ 109], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 126], 00:34:04.563 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 212], 99.95th=[ 265], 00:34:04.563 | 99.99th=[ 269] 00:34:04.563 bw ( KiB/s): min=16384, max=16384, per=33.73%, avg=16384.00, stdev= 0.00, samples=1 00:34:04.563 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:34:04.563 lat (usec) : 100=17.22%, 250=82.72%, 500=0.07% 00:34:04.563 cpu : usr=1.20%, sys=6.10%, ctx=7396, majf=0, minf=9 00:34:04.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.563 issued rwts: total=3584,3804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.563 job2: (groupid=0, jobs=1): err= 0: pid=115816: Wed Nov 6 14:38:31 2024 00:34:04.563 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:34:04.563 slat (nsec): min=5565, max=66000, avg=8778.82, stdev=3045.49 00:34:04.563 clat (usec): min=144, max=544, avg=241.05, stdev=32.59 00:34:04.563 lat (usec): min=151, max=554, avg=249.83, stdev=33.00 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 217], 20.00th=[ 227], 00:34:04.563 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:34:04.563 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 310], 00:34:04.563 | 99.00th=[ 330], 99.50th=[ 343], 
99.90th=[ 515], 99.95th=[ 537], 00:34:04.563 | 99.99th=[ 545] 00:34:04.563 write: IOPS=2445, BW=9782KiB/s (10.0MB/s)(9792KiB/1001msec); 0 zone resets 00:34:04.563 slat (nsec): min=7946, max=56978, avg=13712.87, stdev=4173.31 00:34:04.563 clat (usec): min=99, max=798, avg=184.26, stdev=20.38 00:34:04.563 lat (usec): min=112, max=808, avg=197.98, stdev=20.23 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:34:04.563 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:34:04.563 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 204], 00:34:04.563 | 99.00th=[ 219], 99.50th=[ 245], 99.90th=[ 408], 99.95th=[ 457], 00:34:04.563 | 99.99th=[ 799] 00:34:04.563 bw ( KiB/s): min= 9755, max= 9755, per=20.09%, avg=9755.00, stdev= 0.00, samples=1 00:34:04.563 iops : min= 2438, max= 2438, avg=2438.00, stdev= 0.00, samples=1 00:34:04.563 lat (usec) : 100=0.02%, 250=88.41%, 500=11.48%, 750=0.07%, 1000=0.02% 00:34:04.563 cpu : usr=0.50%, sys=4.10%, ctx=4496, majf=0, minf=9 00:34:04.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.563 issued rwts: total=2048,2448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.563 job3: (groupid=0, jobs=1): err= 0: pid=115817: Wed Nov 6 14:38:31 2024 00:34:04.563 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:34:04.563 slat (nsec): min=6268, max=25415, avg=6940.55, stdev=1380.04 00:34:04.563 clat (usec): min=134, max=327, avg=164.01, stdev=13.53 00:34:04.563 lat (usec): min=141, max=335, avg=170.95, stdev=13.57 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:34:04.563 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:34:04.563 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 188], 00:34:04.563 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 269], 99.95th=[ 306], 00:34:04.563 | 99.99th=[ 330] 00:34:04.563 write: IOPS=3453, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec); 0 zone resets 00:34:04.563 slat (nsec): min=9531, max=87391, avg=11823.81, stdev=5653.11 00:34:04.563 clat (usec): min=76, max=1598, avg=124.03, stdev=29.71 00:34:04.563 lat (usec): min=105, max=1608, avg=135.86, stdev=30.62 00:34:04.563 clat percentiles (usec): 00:34:04.563 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 114], 00:34:04.563 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:34:04.563 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 149], 00:34:04.563 | 99.00th=[ 172], 99.50th=[ 202], 99.90th=[ 306], 99.95th=[ 379], 00:34:04.563 | 99.99th=[ 1598] 00:34:04.563 bw ( KiB/s): min=13720, max=13720, per=28.25%, avg=13720.00, stdev= 0.00, samples=1 00:34:04.563 iops : min= 3430, max= 3430, avg=3430.00, stdev= 0.00, samples=1 00:34:04.563 lat (usec) : 100=0.21%, 250=99.54%, 500=0.23% 00:34:04.563 lat (msec) : 2=0.02% 00:34:04.563 cpu : usr=0.70%, sys=5.10%, ctx=6532, majf=0, minf=18 00:34:04.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.564 issued rwts: total=3072,3457,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:04.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.564 00:34:04.564 Run status group 0 (all jobs): 00:34:04.564 READ: bw=42.0MiB/s (44.0MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=42.0MiB (44.0MB), run=1001-1001msec 00:34:04.564 WRITE: bw=47.4MiB/s (49.7MB/s), 9770KiB/s-14.8MiB/s (10.0MB/s-15.6MB/s), io=47.5MiB (49.8MB), run=1001-1001msec 00:34:04.564 00:34:04.564 Disk stats (read/write): 00:34:04.564 nvme0n1: ios=1936/2048, merge=0/0, ticks=453/374, in_queue=827, util=89.30% 00:34:04.564 nvme0n2: ios=3121/3499, merge=0/0, ticks=450/390, in_queue=840, util=89.76% 00:34:04.564 nvme0n3: ios=1917/2048, merge=0/0, ticks=493/386, in_queue=879, util=90.64% 00:34:04.564 nvme0n4: ios=2698/3072, merge=0/0, ticks=476/381, in_queue=857, util=90.20% 00:34:04.564 14:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:04.564 [global] 00:34:04.564 thread=1 00:34:04.564 invalidate=1 00:34:04.564 rw=write 00:34:04.564 time_based=1 00:34:04.564 runtime=1 00:34:04.564 ioengine=libaio 00:34:04.564 direct=1 00:34:04.564 bs=4096 00:34:04.564 iodepth=128 00:34:04.564 norandommap=0 00:34:04.564 numjobs=1 00:34:04.564 00:34:04.564 verify_dump=1 00:34:04.564 verify_backlog=512 00:34:04.564 verify_state_save=0 00:34:04.564 do_verify=1 00:34:04.564 verify=crc32c-intel 00:34:04.564 [job0] 00:34:04.564 filename=/dev/nvme0n1 00:34:04.564 [job1] 00:34:04.564 filename=/dev/nvme0n2 00:34:04.564 [job2] 00:34:04.564 filename=/dev/nvme0n3 00:34:04.564 [job3] 00:34:04.564 filename=/dev/nvme0n4 00:34:04.564 Could not set queue depth (nvme0n1) 00:34:04.564 Could not set queue depth (nvme0n2) 00:34:04.564 Could not set queue depth (nvme0n3) 00:34:04.564 Could not set queue depth (nvme0n4) 00:34:04.564 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.564 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.564 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.564 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.564 fio-3.35 00:34:04.564 Starting 4 threads 00:34:05.940 00:34:05.940 job0: (groupid=0, jobs=1): err= 0: pid=115876: Wed Nov 6 14:38:33 2024 00:34:05.940 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:34:05.940 slat (usec): min=3, max=6056, avg=139.24, stdev=578.33 00:34:05.940 clat (usec): min=7105, max=26697, avg=17742.75, stdev=4694.09 00:34:05.940 lat (usec): min=7117, max=26707, avg=17881.98, stdev=4728.82 00:34:05.940 clat percentiles (usec): 00:34:05.940 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[11338], 00:34:05.940 | 30.00th=[17433], 40.00th=[18744], 50.00th=[19268], 60.00th=[20055], 00:34:05.940 | 70.00th=[20841], 80.00th=[21627], 90.00th=[21890], 95.00th=[23200], 00:34:05.940 | 99.00th=[24249], 99.50th=[24511], 99.90th=[26084], 99.95th=[26608], 00:34:05.940 | 99.99th=[26608] 00:34:05.940 write: IOPS=3836, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1005msec); 0 zone resets 00:34:05.941 slat (usec): min=7, max=5316, avg=126.10, stdev=611.80 00:34:05.941 clat (usec): min=3596, max=27154, avg=16506.73, stdev=5286.14 00:34:05.941 lat (usec): min=4854, max=27166, avg=16632.83, stdev=5299.11 00:34:05.941 clat percentiles (usec): 00:34:05.941 | 1.00th=[ 7046], 
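The two -d 128 passes that follow exercise a different regime from the -d 1 passes above: with 128 commands outstanding, the 'IO depths' histogram in each summary shows almost everything completing at >=64 in flight, and throughput is governed by Little's law, IOPS ≈ iodepth / mean completion latency, rather than by a single command's round trip. As an illustration with made-up numbers (not taken from this run): at a sustained 2 ms mean latency, iodepth=128 saturates near 128 / 0.002 s = 64,000 IOPS per job, while iodepth=1 at the same latency would cap at 500.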
5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:34:05.941 | 30.00th=[15139], 40.00th=[18220], 50.00th=[18744], 60.00th=[19530], 00:34:05.941 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21103], 95.00th=[22152], 00:34:05.941 | 99.00th=[23987], 99.50th=[26608], 99.90th=[27132], 99.95th=[27132], 00:34:05.941 | 99.99th=[27132] 00:34:05.941 bw ( KiB/s): min=12736, max=17088, per=18.82%, avg=14912.00, stdev=3077.33, samples=2 00:34:05.941 iops : min= 3184, max= 4272, avg=3728.00, stdev=769.33, samples=2 00:34:05.941 lat (msec) : 4=0.01%, 10=20.74%, 20=43.37%, 50=35.87% 00:34:05.941 cpu : usr=1.59%, sys=6.37%, ctx=783, majf=0, minf=11 00:34:05.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:05.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:05.941 issued rwts: total=3584,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:05.941 job1: (groupid=0, jobs=1): err= 0: pid=115877: Wed Nov 6 14:38:33 2024 00:34:05.941 read: IOPS=6503, BW=25.4MiB/s (26.6MB/s)(25.5MiB/1002msec) 00:34:05.941 slat (usec): min=5, max=6914, avg=76.04, stdev=342.32 00:34:05.941 clat (usec): min=667, max=22989, avg=9502.26, stdev=2638.61 00:34:05.941 lat (usec): min=1850, max=23628, avg=9578.31, stdev=2660.63 00:34:05.941 clat percentiles (usec): 00:34:05.941 | 1.00th=[ 5866], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8225], 00:34:05.941 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:34:05.941 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[15926], 00:34:05.941 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22414], 99.95th=[22938], 00:34:05.941 | 99.99th=[22938] 00:34:05.941 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:34:05.941 slat (usec): min=7, max=5416, avg=71.95, stdev=303.38 00:34:05.941 clat (usec): min=6486, max=20841, avg=9675.05, stdev=2914.01 00:34:05.941 lat (usec): min=6499, max=20853, avg=9747.00, stdev=2930.88 00:34:05.941 clat percentiles (usec): 00:34:05.941 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8160], 20.00th=[ 8455], 00:34:05.941 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:34:05.941 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[10814], 95.00th=[19268], 00:34:05.941 | 99.00th=[20579], 99.50th=[20841], 99.90th=[20841], 99.95th=[20841], 00:34:05.941 | 99.99th=[20841] 00:34:05.941 bw ( KiB/s): min=24625, max=28672, per=33.63%, avg=26648.50, stdev=2861.66, samples=2 00:34:05.941 iops : min= 6156, max= 7168, avg=6662.00, stdev=715.59, samples=2 00:34:05.941 lat (usec) : 750=0.01% 00:34:05.941 lat (msec) : 2=0.11%, 4=0.19%, 10=82.43%, 20=14.80%, 50=2.47% 00:34:05.941 cpu : usr=3.20%, sys=9.69%, ctx=970, majf=0, minf=7 00:34:05.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:05.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:05.941 issued rwts: total=6517,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:05.941 job2: (groupid=0, jobs=1): err= 0: pid=115878: Wed Nov 6 14:38:33 2024 00:34:05.941 read: IOPS=5792, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1003msec) 00:34:05.941 slat (usec): min=3, max=2676, avg=83.77, stdev=306.85 00:34:05.941 clat (usec): min=1022, max=14219, 
avg=10661.57, stdev=1122.28 00:34:05.941 lat (usec): min=2546, max=14284, avg=10745.34, stdev=1118.92 00:34:05.941 clat percentiles (usec): 00:34:05.941 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10028], 00:34:05.941 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:34:05.941 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:34:05.941 | 99.00th=[13042], 99.50th=[13042], 99.90th=[14222], 99.95th=[14222], 00:34:05.941 | 99.99th=[14222] 00:34:05.941 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:34:05.941 slat (usec): min=6, max=3948, avg=79.54, stdev=293.41 00:34:05.941 clat (usec): min=7585, max=13435, avg=10542.24, stdev=804.53 00:34:05.941 lat (usec): min=7853, max=13664, avg=10621.78, stdev=765.05 00:34:05.941 clat percentiles (usec): 00:34:05.941 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:34:05.941 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:34:05.941 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:34:05.941 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13173], 99.95th=[13435], 00:34:05.941 | 99.99th=[13435] 00:34:05.941 bw ( KiB/s): min=24576, max=24625, per=31.05%, avg=24600.50, stdev=34.65, samples=2 00:34:05.941 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 00:34:05.941 lat (msec) : 2=0.01%, 4=0.25%, 10=18.60%, 20=81.14% 00:34:05.941 cpu : usr=3.09%, sys=9.18%, ctx=863, majf=0, minf=9 00:34:05.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:05.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:05.941 issued rwts: total=5810,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:05.941 job3: (groupid=0, jobs=1): err= 0: pid=115879: Wed Nov 6 14:38:33 2024 00:34:05.941 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:34:05.941 slat (usec): min=2, max=6668, avg=161.13, stdev=596.21 00:34:05.941 clat (usec): min=13251, max=25574, avg=20527.22, stdev=1633.87 00:34:05.941 lat (usec): min=13259, max=27691, avg=20688.35, stdev=1611.07 00:34:05.941 clat percentiles (usec): 00:34:05.941 | 1.00th=[16188], 5.00th=[17695], 10.00th=[18744], 20.00th=[19268], 00:34:05.941 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[20841], 00:34:05.941 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22414], 95.00th=[22938], 00:34:05.941 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[25560], 00:34:05.941 | 99.99th=[25560] 00:34:05.941 write: IOPS=3242, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1003msec); 0 zone resets 00:34:05.941 slat (usec): min=4, max=5672, avg=151.24, stdev=702.66 00:34:05.941 clat (usec): min=2134, max=25159, avg=19488.59, stdev=2357.52 00:34:05.941 lat (usec): min=2629, max=25283, avg=19639.83, stdev=2278.82 00:34:05.941 clat percentiles (usec): 00:34:05.941 | 1.00th=[ 7177], 5.00th=[16057], 10.00th=[17695], 20.00th=[18482], 00:34:05.941 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:34:05.941 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21103], 95.00th=[22414], 00:34:05.941 | 99.00th=[23987], 99.50th=[23987], 99.90th=[25035], 99.95th=[25035], 00:34:05.941 | 99.99th=[25035] 00:34:05.941 bw ( KiB/s): min=12344, max=12656, per=15.78%, avg=12500.00, stdev=220.62, samples=2 00:34:05.941 iops : min= 3086, max= 3164, avg=3125.00, 
stdev=55.15, samples=2 00:34:05.941 lat (msec) : 4=0.19%, 10=0.63%, 20=46.13%, 50=53.05% 00:34:05.941 cpu : usr=1.30%, sys=5.69%, ctx=765, majf=0, minf=23 00:34:05.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:05.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:05.941 issued rwts: total=3072,3252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:05.941 00:34:05.941 Run status group 0 (all jobs): 00:34:05.941 READ: bw=73.8MiB/s (77.4MB/s), 12.0MiB/s-25.4MiB/s (12.5MB/s-26.6MB/s), io=74.2MiB (77.8MB), run=1002-1005msec 00:34:05.941 WRITE: bw=77.4MiB/s (81.1MB/s), 12.7MiB/s-25.9MiB/s (13.3MB/s-27.2MB/s), io=77.8MiB (81.5MB), run=1002-1005msec 00:34:05.941 00:34:05.941 Disk stats (read/write): 00:34:05.941 nvme0n1: ios=3169/3584, merge=0/0, ticks=15277/14885, in_queue=30162, util=89.68% 00:34:05.941 nvme0n2: ios=5678/5783, merge=0/0, ticks=17406/16598, in_queue=34004, util=90.04% 00:34:05.941 nvme0n3: ios=5156/5331, merge=0/0, ticks=13771/12971, in_queue=26742, util=90.57% 00:34:05.941 nvme0n4: ios=2581/3035, merge=0/0, ticks=12961/13675, in_queue=26636, util=90.23% 00:34:05.941 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:05.941 [global] 00:34:05.941 thread=1 00:34:05.941 invalidate=1 00:34:05.941 rw=randwrite 00:34:05.941 time_based=1 00:34:05.941 runtime=1 00:34:05.941 ioengine=libaio 00:34:05.941 direct=1 00:34:05.941 bs=4096 00:34:05.941 iodepth=128 00:34:05.941 norandommap=0 00:34:05.941 numjobs=1 00:34:05.941 00:34:05.941 verify_dump=1 00:34:05.941 verify_backlog=512 00:34:05.941 verify_state_save=0 00:34:05.941 do_verify=1 00:34:05.941 verify=crc32c-intel 00:34:05.941 [job0] 00:34:05.941 filename=/dev/nvme0n1 00:34:05.941 [job1] 00:34:05.941 filename=/dev/nvme0n2 00:34:05.941 [job2] 00:34:05.941 filename=/dev/nvme0n3 00:34:05.941 [job3] 00:34:05.941 filename=/dev/nvme0n4 00:34:05.941 Could not set queue depth (nvme0n1) 00:34:05.941 Could not set queue depth (nvme0n2) 00:34:05.941 Could not set queue depth (nvme0n3) 00:34:05.941 Could not set queue depth (nvme0n4) 00:34:05.941 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.941 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.941 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.941 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.941 fio-3.35 00:34:05.941 Starting 4 threads 00:34:06.877 00:34:06.877 job0: (groupid=0, jobs=1): err= 0: pid=115935: Wed Nov 6 14:38:34 2024 00:34:06.877 read: IOPS=6660, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1003msec) 00:34:06.877 slat (usec): min=4, max=3254, avg=72.89, stdev=265.48 00:34:06.877 clat (usec): min=2168, max=11762, avg=9146.11, stdev=766.29 00:34:06.877 lat (usec): min=2688, max=11768, avg=9219.01, stdev=754.82 00:34:06.877 clat percentiles (usec): 00:34:06.877 | 1.00th=[ 7504], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8586], 00:34:06.877 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:34:06.877 | 70.00th=[ 9503], 80.00th=[ 9634], 
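All of these jobs run with data verification enabled, which is worth keeping in mind when reading the bandwidth numbers: every written block carries a crc32c header that fio reads back and checks, and verify_backlog=512 interleaves a verification sweep after every 512 blocks instead of deferring it to the end of the job. The relevant knobs, isolated into a minimal job file matching the options echoed above (illustrative; [verify-example] and the single filename are placeholders):

    [verify-example]
    filename=/dev/nvme0n1
    ioengine=libaio
    direct=1
    rw=write
    bs=4096
    iodepth=128
    do_verify=1
    verify=crc32c-intel   ; checksum each block, hardware-accelerated crc32c
    verify_backlog=512    ; verify after every 512 blocks, not at job end
    verify_dump=1         ; on mismatch, dump expected/received buffers for inspection
    verify_state_save=0   ; do not persist verify state between runs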
90.00th=[10028], 95.00th=[10421], 00:34:06.877 | 99.00th=[10814], 99.50th=[10945], 99.90th=[11076], 99.95th=[11731], 00:34:06.877 | 99.99th=[11731] 00:34:06.877 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:34:06.877 slat (usec): min=5, max=2079, avg=67.71, stdev=197.58 00:34:06.877 clat (usec): min=3928, max=11452, avg=9181.83, stdev=648.11 00:34:06.877 lat (usec): min=4309, max=11465, avg=9249.54, stdev=624.66 00:34:06.877 clat percentiles (usec): 00:34:06.877 | 1.00th=[ 7242], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:34:06.877 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9241], 00:34:06.877 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10421], 00:34:06.877 | 99.00th=[10814], 99.50th=[10814], 99.90th=[11076], 99.95th=[11469], 00:34:06.877 | 99.99th=[11469] 00:34:06.877 bw ( KiB/s): min=27848, max=28672, per=35.84%, avg=28260.00, stdev=582.66, samples=2 00:34:06.877 iops : min= 6962, max= 7168, avg=7065.00, stdev=145.66, samples=2 00:34:06.877 lat (msec) : 4=0.18%, 10=89.93%, 20=9.89% 00:34:06.877 cpu : usr=2.69%, sys=10.28%, ctx=1169, majf=0, minf=9 00:34:06.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:06.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.877 issued rwts: total=6680,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.877 job1: (groupid=0, jobs=1): err= 0: pid=115936: Wed Nov 6 14:38:34 2024 00:34:06.877 read: IOPS=3155, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1003msec) 00:34:06.877 slat (usec): min=3, max=7100, avg=137.88, stdev=626.59 00:34:06.877 clat (usec): min=1373, max=26622, avg=17108.64, stdev=3350.05 00:34:06.877 lat (usec): min=3803, max=28408, avg=17246.52, stdev=3346.15 00:34:06.877 clat percentiles (usec): 00:34:06.877 | 1.00th=[ 5014], 5.00th=[12911], 10.00th=[14222], 20.00th=[14484], 00:34:06.877 | 30.00th=[14615], 40.00th=[15926], 50.00th=[16909], 60.00th=[17957], 00:34:06.877 | 70.00th=[19006], 80.00th=[20055], 90.00th=[21627], 95.00th=[21890], 00:34:06.877 | 99.00th=[24249], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:34:06.877 | 99.99th=[26608] 00:34:06.877 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:34:06.877 slat (usec): min=5, max=4527, avg=153.22, stdev=602.74 00:34:06.877 clat (usec): min=8638, max=34194, avg=20165.13, stdev=5574.66 00:34:06.877 lat (usec): min=10992, max=34209, avg=20318.34, stdev=5586.59 00:34:06.877 clat percentiles (usec): 00:34:06.877 | 1.00th=[10945], 5.00th=[11207], 10.00th=[11469], 20.00th=[16581], 00:34:06.877 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19530], 00:34:06.877 | 70.00th=[21103], 80.00th=[26084], 90.00th=[29230], 95.00th=[30016], 00:34:06.877 | 99.00th=[31327], 99.50th=[32113], 99.90th=[34341], 99.95th=[34341], 00:34:06.877 | 99.99th=[34341] 00:34:06.877 bw ( KiB/s): min=12312, max=16104, per=18.02%, avg=14208.00, stdev=2681.35, samples=2 00:34:06.877 iops : min= 3078, max= 4026, avg=3552.00, stdev=670.34, samples=2 00:34:06.877 lat (msec) : 2=0.01%, 4=0.27%, 10=0.86%, 20=70.32%, 50=28.54% 00:34:06.877 cpu : usr=1.30%, sys=6.59%, ctx=414, majf=0, minf=15 00:34:06.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:06.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.877 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.877 issued rwts: total=3165,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.877 job2: (groupid=0, jobs=1): err= 0: pid=115937: Wed Nov 6 14:38:34 2024 00:34:06.877 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:34:06.877 slat (usec): min=4, max=6537, avg=145.08, stdev=738.47 00:34:06.877 clat (usec): min=11558, max=26821, avg=18592.39, stdev=2964.39 00:34:06.877 lat (usec): min=12974, max=30375, avg=18737.48, stdev=2921.82 00:34:06.877 clat percentiles (usec): 00:34:06.877 | 1.00th=[12649], 5.00th=[15008], 10.00th=[15008], 20.00th=[15139], 00:34:06.877 | 30.00th=[15795], 40.00th=[17695], 50.00th=[19006], 60.00th=[20055], 00:34:06.877 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21890], 95.00th=[22676], 00:34:06.877 | 99.00th=[25560], 99.50th=[26084], 99.90th=[26608], 99.95th=[26870], 00:34:06.878 | 99.99th=[26870] 00:34:06.878 write: IOPS=3394, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1004msec); 0 zone resets 00:34:06.878 slat (usec): min=8, max=6735, avg=158.59, stdev=630.96 00:34:06.878 clat (usec): min=1820, max=44946, avg=20427.27, stdev=6259.48 00:34:06.878 lat (usec): min=4191, max=44963, avg=20585.85, stdev=6273.10 00:34:06.878 clat percentiles (usec): 00:34:06.878 | 1.00th=[11207], 5.00th=[14353], 10.00th=[14484], 20.00th=[16909], 00:34:06.878 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:34:06.878 | 70.00th=[19792], 80.00th=[22676], 90.00th=[28967], 95.00th=[34866], 00:34:06.878 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:34:06.878 | 99.99th=[44827] 00:34:06.878 bw ( KiB/s): min=11376, max=14893, per=16.66%, avg=13134.50, stdev=2486.89, samples=2 00:34:06.878 iops : min= 2844, max= 3723, avg=3283.50, stdev=621.55, samples=2 00:34:06.878 lat (msec) : 2=0.02%, 10=0.49%, 20=66.17%, 50=33.32% 00:34:06.878 cpu : usr=1.40%, sys=6.18%, ctx=341, majf=0, minf=11 00:34:06.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:06.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.878 issued rwts: total=3072,3408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.878 job3: (groupid=0, jobs=1): err= 0: pid=115938: Wed Nov 6 14:38:34 2024 00:34:06.878 read: IOPS=5554, BW=21.7MiB/s (22.8MB/s)(21.7MiB/1002msec) 00:34:06.878 slat (usec): min=5, max=2720, avg=91.12, stdev=334.46 00:34:06.878 clat (usec): min=1063, max=15105, avg=11284.32, stdev=1519.84 00:34:06.878 lat (usec): min=2248, max=15116, avg=11375.44, stdev=1516.42 00:34:06.878 clat percentiles (usec): 00:34:06.878 | 1.00th=[ 6259], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10159], 00:34:06.878 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:34:06.878 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:34:06.878 | 99.00th=[14353], 99.50th=[14615], 99.90th=[14877], 99.95th=[15139], 00:34:06.878 | 99.99th=[15139] 00:34:06.878 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:34:06.878 slat (usec): min=7, max=2775, avg=83.23, stdev=248.03 00:34:06.878 clat (usec): min=7875, max=14983, avg=11323.09, stdev=1142.15 00:34:06.878 lat (usec): min=7894, max=14997, avg=11406.32, stdev=1128.07 00:34:06.878 clat percentiles (usec): 00:34:06.878 | 1.00th=[ 9110], 5.00th=[ 
9765], 10.00th=[10028], 20.00th=[10290], 00:34:06.878 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:34:06.878 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12518], 95.00th=[13042], 00:34:06.878 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15008], 99.95th=[15008], 00:34:06.878 | 99.99th=[15008] 00:34:06.878 bw ( KiB/s): min=20521, max=24576, per=28.60%, avg=22548.50, stdev=2867.32, samples=2 00:34:06.878 iops : min= 5130, max= 6144, avg=5637.00, stdev=717.01, samples=2 00:34:06.878 lat (msec) : 2=0.01%, 4=0.29%, 10=10.85%, 20=88.86% 00:34:06.878 cpu : usr=2.50%, sys=9.19%, ctx=950, majf=0, minf=16 00:34:06.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:06.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.878 issued rwts: total=5566,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.878 00:34:06.878 Run status group 0 (all jobs): 00:34:06.878 READ: bw=71.9MiB/s (75.4MB/s), 12.0MiB/s-26.0MiB/s (12.5MB/s-27.3MB/s), io=72.2MiB (75.7MB), run=1002-1004msec 00:34:06.878 WRITE: bw=77.0MiB/s (80.7MB/s), 13.3MiB/s-27.9MiB/s (13.9MB/s-29.3MB/s), io=77.3MiB (81.1MB), run=1002-1004msec 00:34:06.878 00:34:06.878 Disk stats (read/write): 00:34:06.878 nvme0n1: ios=6097/6144, merge=0/0, ticks=13668/13114, in_queue=26782, util=89.98% 00:34:06.878 nvme0n2: ios=3020/3072, merge=0/0, ticks=12870/13995, in_queue=26865, util=90.14% 00:34:06.878 nvme0n3: ios=2605/3055, merge=0/0, ticks=12033/15195, in_queue=27228, util=90.87% 00:34:06.878 nvme0n4: ios=4644/5119, merge=0/0, ticks=13472/13492, in_queue=26964, util=90.84% 00:34:06.878 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:07.136 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=115953 00:34:07.136 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:07.136 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:07.136 [global] 00:34:07.136 thread=1 00:34:07.136 invalidate=1 00:34:07.136 rw=read 00:34:07.136 time_based=1 00:34:07.136 runtime=10 00:34:07.136 ioengine=libaio 00:34:07.136 direct=1 00:34:07.136 bs=4096 00:34:07.136 iodepth=1 00:34:07.136 norandommap=1 00:34:07.136 numjobs=1 00:34:07.136 00:34:07.136 [job0] 00:34:07.136 filename=/dev/nvme0n1 00:34:07.136 [job1] 00:34:07.136 filename=/dev/nvme0n2 00:34:07.136 [job2] 00:34:07.136 filename=/dev/nvme0n3 00:34:07.136 [job3] 00:34:07.136 filename=/dev/nvme0n4 00:34:07.136 Could not set queue depth (nvme0n1) 00:34:07.136 Could not set queue depth (nvme0n2) 00:34:07.136 Could not set queue depth (nvme0n3) 00:34:07.136 Could not set queue depth (nvme0n4) 00:34:07.136 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:07.136 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:07.136 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:07.136 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:07.136 fio-3.35 00:34:07.136 Starting 4 threads 00:34:10.422 
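For reference, the job file printed above (as emitted by scripts/fio-wrapper with -p nvmf -i 4096 -d 1 -t read -r 10) can be reproduced by hand. A minimal sketch, assuming the four namespaces are already connected as /dev/nvme0n1..nvme0n4 (the file name is arbitrary):

cat > nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-read.fio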
14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:10.422 fio: pid=115996, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:10.422 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=56664064, buflen=4096 00:34:10.422 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:10.422 fio: pid=115995, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:10.422 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49520640, buflen=4096 00:34:10.422 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:10.422 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:10.680 fio: pid=115993, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:10.680 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9306112, buflen=4096 00:34:10.680 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:10.680 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:10.939 fio: pid=115994, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:10.939 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=21266432, buflen=4096 00:34:10.939 00:34:10.939 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=115993: Wed Nov 6 14:38:38 2024 00:34:10.939 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(72.9MiB/3336msec) 00:34:10.939 slat (usec): min=4, max=16797, avg= 9.78, stdev=187.25 00:34:10.939 clat (usec): min=110, max=3595, avg=168.44, stdev=54.06 00:34:10.939 lat (usec): min=118, max=18860, avg=178.23, stdev=203.93 00:34:10.939 clat percentiles (usec): 00:34:10.939 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:34:10.939 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 161], 00:34:10.939 | 70.00th=[ 172], 80.00th=[ 190], 90.00th=[ 223], 95.00th=[ 237], 00:34:10.939 | 99.00th=[ 262], 99.50th=[ 322], 99.90th=[ 611], 99.95th=[ 1029], 00:34:10.939 | 99.99th=[ 2057] 00:34:10.939 bw ( KiB/s): min=19424, max=25792, per=31.22%, avg=22753.50, stdev=2660.89, samples=6 00:34:10.939 iops : min= 4856, max= 6448, avg=5688.33, stdev=665.21, samples=6 00:34:10.939 lat (usec) : 250=98.00%, 500=1.86%, 750=0.06%, 1000=0.02% 00:34:10.939 lat (msec) : 2=0.04%, 4=0.01% 00:34:10.939 cpu : usr=0.42%, sys=3.48%, ctx=18664, majf=0, minf=1 00:34:10.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.939 issued rwts: total=18657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:10.939 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=115994: Wed Nov 6 14:38:38 2024 00:34:10.939 read: IOPS=5942, BW=23.2MiB/s (24.3MB/s)(84.3MiB/3631msec) 00:34:10.939 slat (usec): min=4, max=12379, avg=10.63, stdev=158.18 00:34:10.939 clat (usec): min=97, max=1833, avg=157.17, stdev=44.31 00:34:10.939 lat (usec): min=102, max=12651, avg=167.80, stdev=164.68 00:34:10.939 clat percentiles (usec): 00:34:10.939 | 1.00th=[ 113], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 128], 00:34:10.939 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 147], 60.00th=[ 153], 00:34:10.939 | 70.00th=[ 165], 80.00th=[ 182], 90.00th=[ 217], 95.00th=[ 235], 00:34:10.939 | 99.00th=[ 255], 99.50th=[ 277], 99.90th=[ 465], 99.95th=[ 603], 00:34:10.939 | 99.99th=[ 1483] 00:34:10.939 bw ( KiB/s): min=19816, max=28024, per=32.12%, avg=23412.33, stdev=3472.16, samples=6 00:34:10.939 iops : min= 4954, max= 7006, avg=5853.00, stdev=867.94, samples=6 00:34:10.939 lat (usec) : 100=0.01%, 250=98.43%, 500=1.48%, 750=0.05% 00:34:10.939 lat (msec) : 2=0.02% 00:34:10.939 cpu : usr=0.39%, sys=3.77%, ctx=21591, majf=0, minf=1 00:34:10.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.939 issued rwts: total=21577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:10.939 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=115995: Wed Nov 6 14:38:38 2024 00:34:10.939 read: IOPS=3864, BW=15.1MiB/s (15.8MB/s)(47.2MiB/3129msec) 00:34:10.939 slat (usec): min=5, max=10673, avg=10.09, stdev=118.50 00:34:10.939 clat (usec): min=113, max=1619, avg=248.02, stdev=59.32 00:34:10.939 lat (usec): min=121, max=10963, avg=258.12, stdev=133.60 00:34:10.939 clat percentiles (usec): 00:34:10.939 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 200], 20.00th=[ 217], 00:34:10.940 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 245], 00:34:10.940 | 70.00th=[ 260], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 343], 00:34:10.940 | 99.00th=[ 383], 99.50th=[ 420], 99.90th=[ 553], 99.95th=[ 725], 00:34:10.940 | 99.99th=[ 1532] 00:34:10.940 bw ( KiB/s): min=14128, max=16272, per=20.74%, avg=15115.17, stdev=861.40, samples=6 00:34:10.940 iops : min= 3532, max= 4068, avg=3778.67, stdev=215.51, samples=6 00:34:10.940 lat (usec) : 250=64.14%, 500=35.69%, 750=0.12%, 1000=0.02% 00:34:10.940 lat (msec) : 2=0.02% 00:34:10.940 cpu : usr=0.29%, sys=2.37%, ctx=12096, majf=0, minf=2 00:34:10.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.940 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.940 issued rwts: total=12091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:10.940 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=115996: Wed Nov 6 14:38:38 2024 00:34:10.940 read: IOPS=4728, BW=18.5MiB/s (19.4MB/s)(54.0MiB/2926msec) 00:34:10.940 slat (nsec): min=6393, max=92946, avg=8640.49, stdev=2148.47 00:34:10.940 clat (usec): min=105, max=1524, avg=202.19, stdev=42.45 00:34:10.940 lat (usec): min=113, max=1533, avg=210.83, stdev=42.78 00:34:10.940 clat percentiles (usec): 00:34:10.940 | 1.00th=[ 121], 
5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 157], 00:34:10.940 | 30.00th=[ 192], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 223], 00:34:10.940 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 249], 00:34:10.940 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 326], 99.95th=[ 367], 00:34:10.940 | 99.99th=[ 1418] 00:34:10.940 bw ( KiB/s): min=17000, max=20544, per=25.47%, avg=18560.00, stdev=1483.19, samples=5 00:34:10.940 iops : min= 4250, max= 5136, avg=4640.00, stdev=370.80, samples=5 00:34:10.940 lat (usec) : 250=95.69%, 500=4.28%, 750=0.01% 00:34:10.940 lat (msec) : 2=0.01% 00:34:10.940 cpu : usr=0.34%, sys=2.70%, ctx=13835, majf=0, minf=2 00:34:10.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.940 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.940 issued rwts: total=13835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:10.940 00:34:10.940 Run status group 0 (all jobs): 00:34:10.940 READ: bw=71.2MiB/s (74.6MB/s), 15.1MiB/s-23.2MiB/s (15.8MB/s-24.3MB/s), io=258MiB (271MB), run=2926-3631msec 00:34:10.940 00:34:10.940 Disk stats (read/write): 00:34:10.940 nvme0n1: ios=17694/0, merge=0/0, ticks=3001/0, in_queue=3001, util=95.78% 00:34:10.940 nvme0n2: ios=19670/0, merge=0/0, ticks=3200/0, in_queue=3200, util=95.47% 00:34:10.940 nvme0n3: ios=10828/0, merge=0/0, ticks=2821/0, in_queue=2821, util=96.72% 00:34:10.940 nvme0n4: ios=13633/0, merge=0/0, ticks=2810/0, in_queue=2810, util=96.78% 00:34:10.940 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:10.940 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:11.515 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.515 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:11.515 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.515 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:11.774 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.774 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:12.032 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:12.032 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:12.290 14:38:39 
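The RPC calls traced above remove every backing bdev while the reads started by fio.sh@58 are still in flight; each fio job then fails with err=95 (Operation not supported), which is exactly what the hotplug test expects. A condensed sketch of that hot-remove sequence (rpc.py path and bdev names as they appear in this log):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# raid-backed namespaces go first (fio.sh@63 / fio.sh@64)
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
# malloc bdevs are removed one by one, mirroring the for-loop at fio.sh@65
for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$b"
done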
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 115953 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:12.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:12.290 nvmf hotplug test: fio failed as expected 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:12.290 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.551 rmmod nvme_tcp 00:34:12.551 rmmod nvme_fabrics 00:34:12.551 rmmod nvme_keyring 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
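waitforserial_disconnect above passes once no block device reports the subsystem serial. A minimal sketch of that check, assuming the same serial and a 15-second ceiling (the real helper's retry budget may differ):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
i=0
# poll until the SPDK namespace's serial disappears from lsblk output
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    i=$((i + 1))
    [ "$i" -le 15 ] || { echo "device did not go away" >&2; exit 1; }
    sleep 1
done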
00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 115472 ']' 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 115472 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 115472 ']' 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 115472 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 115472 00:34:12.551 killing process with pid 115472 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 115472' 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 115472 00:34:12.551 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 115472 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- 
# ip link set nvmf_tgt_br2 nomaster 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:13.493 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:13.494 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:13.494 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:13.494 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:13.494 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:13.494 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:13.494 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:13.494 14:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:34:13.494 ************************************ 00:34:13.494 END TEST nvmf_fio_target 00:34:13.494 ************************************ 00:34:13.494 00:34:13.494 real 0m19.497s 00:34:13.494 user 0m57.753s 00:34:13.494 sys 0m8.318s 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:13.494 ************************************ 00:34:13.494 START TEST nvmf_bdevio 00:34:13.494 ************************************ 00:34:13.494 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:13.754 * Looking for test storage... 
00:34:13.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:13.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.754 --rc genhtml_branch_coverage=1 00:34:13.754 --rc genhtml_function_coverage=1 00:34:13.754 --rc genhtml_legend=1 00:34:13.754 --rc geninfo_all_blocks=1 00:34:13.754 --rc geninfo_unexecuted_blocks=1 00:34:13.754 00:34:13.754 ' 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:13.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.754 --rc genhtml_branch_coverage=1 00:34:13.754 --rc genhtml_function_coverage=1 00:34:13.754 --rc genhtml_legend=1 00:34:13.754 --rc geninfo_all_blocks=1 00:34:13.754 --rc geninfo_unexecuted_blocks=1 00:34:13.754 00:34:13.754 ' 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:13.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.754 --rc genhtml_branch_coverage=1 00:34:13.754 --rc genhtml_function_coverage=1 00:34:13.754 --rc genhtml_legend=1 00:34:13.754 --rc geninfo_all_blocks=1 00:34:13.754 --rc geninfo_unexecuted_blocks=1 00:34:13.754 00:34:13.754 ' 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:13.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.754 --rc genhtml_branch_coverage=1 00:34:13.754 --rc genhtml_function_coverage=1 00:34:13.754 --rc genhtml_legend=1 00:34:13.754 --rc geninfo_all_blocks=1 00:34:13.754 --rc geninfo_unexecuted_blocks=1 00:34:13.754 00:34:13.754 ' 00:34:13.754 14:38:41 
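The lt/cmp_versions trace above is deciding that lcov 1.15 sorts before 2, so the legacy --rc option spellings are kept. The same field-by-field numeric compare can be sketched in a few lines (hypothetical helper name; this splits on '.' only, whereas the real helper also splits on '-' and ':'):

version_lt() {
    # Returns 0 (true) when $1 sorts strictly before $2, comparing
    # dot-separated fields numerically and padding the shorter with zeros.
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov < 2: keep the old --rc lcov_* flag names"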
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.754 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.755 14:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:13.755 Cannot find device "nvmf_init_br" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:13.755 Cannot find device "nvmf_init_br2" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:13.755 Cannot find device "nvmf_tgt_br" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:13.755 Cannot find device "nvmf_tgt_br2" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:13.755 Cannot find device "nvmf_init_br" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:13.755 Cannot find device "nvmf_init_br2" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:13.755 Cannot find device "nvmf_tgt_br" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:13.755 Cannot find device "nvmf_tgt_br2" 00:34:13.755 14:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:13.755 Cannot find device "nvmf_br" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:13.755 Cannot find device "nvmf_init_if" 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:34:13.755 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:14.017 Cannot find device "nvmf_init_if2" 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:14.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:14.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:14.017 14:38:41 
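nvmf_veth_init above builds two initiator-side veth pairs in the root namespace and two target-side pairs whose "if" ends move into nvmf_tgt_ns_spdk. A condensed sketch of one pair of each, using the commands as traced (the second pair, the bridge enslavement, and the remaining link-up calls that follow are analogous):

ip netns add nvmf_tgt_ns_spdk
# initiator pair: nvmf_init_if stays in the root namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
# target pair: the "if" end is pushed into the namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up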
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:14.017 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:14.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
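The ipts wrapper seen just above tags every ACCEPT rule with an SPDK_NVMF comment; that tag is what lets the teardown earlier in this log drop all of them in one pass. Sketch of the pattern, taken directly from the traced commands:

# open NVMe/TCP port 4420 on the initiator interface, tagged for later cleanup
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# cleanup: rewrite the ruleset with every SPDK_NVMF-tagged rule filtered out
iptables-save | grep -v SPDK_NVMF | iptables-restore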
00:34:14.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:34:14.017 00:34:14.017 --- 10.0.0.3 ping statistics --- 00:34:14.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.017 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:14.018 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:14.018 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:34:14.018 00:34:14.018 --- 10.0.0.4 ping statistics --- 00:34:14.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.018 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:14.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:34:14.018 00:34:14.018 --- 10.0.0.1 ping statistics --- 00:34:14.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.018 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:14.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:34:14.018 00:34:14.018 --- 10.0.0.2 ping statistics --- 00:34:14.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.018 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=116375 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 116375 00:34:14.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 116375 ']' 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:14.018 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.278 [2024-11-06 14:38:41.673332] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:14.278 [2024-11-06 14:38:41.675495] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:14.278 [2024-11-06 14:38:41.675683] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.278 [2024-11-06 14:38:41.841293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:14.539 [2024-11-06 14:38:41.949424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.540 [2024-11-06 14:38:41.949479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.540 [2024-11-06 14:38:41.949489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.540 [2024-11-06 14:38:41.949499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.540 [2024-11-06 14:38:41.949507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.540 [2024-11-06 14:38:41.951274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:14.540 [2024-11-06 14:38:41.951485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:14.540 [2024-11-06 14:38:41.951802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:14.540 [2024-11-06 14:38:41.951922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:14.800 [2024-11-06 14:38:42.203813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:14.800 [2024-11-06 14:38:42.204748] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:14.800 [2024-11-06 14:38:42.205167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
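waitforlisten above blocks until the target's RPC socket answers. A minimal sketch of launching nvmf_tgt inside the namespace in interrupt mode and polling for readiness (the polling loop is an assumption; the real helper also enforces a retry limit):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# poll the default RPC socket until the target responds
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done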
00:34:14.800 [2024-11-06 14:38:42.205673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:14.800 [2024-11-06 14:38:42.205759] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.062 [2024-11-06 14:38:42.581007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.062 Malloc0 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.062 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.324 [2024-11-06 14:38:42.705077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:15.324 { 00:34:15.324 "params": { 00:34:15.324 "name": "Nvme$subsystem", 00:34:15.324 "trtype": "$TEST_TRANSPORT", 00:34:15.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.324 "adrfam": "ipv4", 00:34:15.324 "trsvcid": "$NVMF_PORT", 00:34:15.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.324 "hdgst": ${hdgst:-false}, 00:34:15.324 "ddgst": ${ddgst:-false} 00:34:15.324 }, 00:34:15.324 "method": "bdev_nvme_attach_controller" 00:34:15.324 } 00:34:15.324 EOF 00:34:15.324 )") 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:15.324 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:15.324 "params": { 00:34:15.324 "name": "Nvme1", 00:34:15.324 "trtype": "tcp", 00:34:15.324 "traddr": "10.0.0.3", 00:34:15.324 "adrfam": "ipv4", 00:34:15.324 "trsvcid": "4420", 00:34:15.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.324 "hdgst": false, 00:34:15.324 "ddgst": false 00:34:15.324 }, 00:34:15.324 "method": "bdev_nvme_attach_controller" 00:34:15.324 }' 00:34:15.324 [2024-11-06 14:38:42.771449] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
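For context: the target side of this test is configured entirely over RPC (create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, create cnode1, attach the namespace, listen on 10.0.0.3:4420), while the JSON printed above is the initiator-side bdev config that gen_nvmf_target_json emits and bdevio reads back through --json /dev/fd/62. A minimal sketch of that file-descriptor plumbing, assuming the helpers from nvmf/common.sh are already sourced:

    # Process substitution exposes the generated config as /dev/fd/NN, which is
    # exactly the '--json /dev/fd/62' invocation seen in the trace above.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
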
00:34:15.324 [2024-11-06 14:38:42.771537] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116429 ] 00:34:15.324 [2024-11-06 14:38:42.925391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:15.585 [2024-11-06 14:38:43.039416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.585 [2024-11-06 14:38:43.039707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.585 [2024-11-06 14:38:43.039726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.845 I/O targets: 00:34:15.845 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:15.845 00:34:15.845 00:34:15.845 CUnit - A unit testing framework for C - Version 2.1-3 00:34:15.845 http://cunit.sourceforge.net/ 00:34:15.845 00:34:15.845 00:34:15.845 Suite: bdevio tests on: Nvme1n1 00:34:15.845 Test: blockdev write read block ...passed 00:34:15.845 Test: blockdev write zeroes read block ...passed 00:34:15.845 Test: blockdev write zeroes read no split ...passed 00:34:15.845 Test: blockdev write zeroes read split ...passed 00:34:16.105 Test: blockdev write zeroes read split partial ...passed 00:34:16.105 Test: blockdev reset ...[2024-11-06 14:38:43.502757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:16.105 [2024-11-06 14:38:43.502867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:34:16.105 [2024-11-06 14:38:43.507644] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:16.105 passed 00:34:16.105 Test: blockdev write read 8 blocks ...passed 00:34:16.105 Test: blockdev write read size > 128k ...passed 00:34:16.105 Test: blockdev write read invalid size ...passed 00:34:16.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:16.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:16.105 Test: blockdev write read max offset ...passed 00:34:16.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:16.105 Test: blockdev writev readv 8 blocks ...passed 00:34:16.105 Test: blockdev writev readv 30 x 1block ...passed 00:34:16.105 Test: blockdev writev readv block ...passed 00:34:16.105 Test: blockdev writev readv size > 128k ...passed 00:34:16.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:16.105 Test: blockdev comparev and writev ...[2024-11-06 14:38:43.684747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.684789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.105 [2024-11-06 14:38:43.684804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.684813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.105 [2024-11-06 14:38:43.685185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.685203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.105 [2024-11-06 14:38:43.685216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.685223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.105 [2024-11-06 14:38:43.685581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.685599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.105 [2024-11-06 14:38:43.685611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.685618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.105 [2024-11-06 14:38:43.686009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.686028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.105 [2024-11-06 14:38:43.686040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.105 [2024-11-06 14:38:43.686047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.105 passed 00:34:16.364 Test: blockdev nvme passthru rw ...passed 00:34:16.364 Test: blockdev nvme passthru vendor specific ...[2024-11-06 14:38:43.770352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.364 [2024-11-06 14:38:43.770376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.364 [2024-11-06 14:38:43.770493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.364 [2024-11-06 14:38:43.770503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.364 [2024-11-06 14:38:43.770658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.364 [2024-11-06 14:38:43.770675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.364 [2024-11-06 14:38:43.770942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.364 [2024-11-06 14:38:43.770969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.364 passed 00:34:16.364 Test: blockdev nvme admin passthru ...passed 00:34:16.364 Test: blockdev copy ...passed 00:34:16.364 00:34:16.364 Run Summary: Type Total Ran Passed Failed Inactive 00:34:16.364 suites 1 1 n/a 0 0 00:34:16.364 tests 23 23 23 0 0 00:34:16.364 asserts 152 152 152 0 n/a 00:34:16.364 00:34:16.364 Elapsed time = 0.954 seconds 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.301 rmmod nvme_tcp 00:34:17.301 rmmod nvme_fabrics 00:34:17.301 rmmod nvme_keyring 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
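With the bdevio suite passed (23/23 tests, 152 asserts), teardown runs in a fixed order: delete the subsystem over RPC, unload the kernel initiator modules (the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away), then kill the target by pid; note that killprocess sees the process name as reactor_3 because SPDK names its main thread after the lowest core in the -m 0x78 mask. A condensed sketch of the module-unload retry implied by the 'set +e' / 'for i in {1..20}' trace, with the per-iteration wait being an assumption about nvmf/common.sh:

    set +e                                    # unloading can fail while a connection
    for i in {1..20}; do                      # still holds a module reference
        modprobe -v -r nvme-tcp &&
        modprobe -v -r nvme-fabrics && break
        sleep 1                               # assumed backoff between attempts
    done
    set -e
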
00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 116375 ']' 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 116375 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 116375 ']' 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 116375 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 116375 00:34:17.301 killing process with pid 116375 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 116375' 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 116375 00:34:17.301 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 116375 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:17.870 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:18.131 14:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:34:18.131 00:34:18.131 real 0m4.619s 00:34:18.131 user 0m13.038s 00:34:18.131 sys 0m1.249s 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:18.131 ************************************ 00:34:18.131 END TEST nvmf_bdevio 00:34:18.131 ************************************ 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:18.131 00:34:18.131 real 3m38.079s 00:34:18.131 user 9m44.025s 00:34:18.131 sys 1m5.159s 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:18.131 ************************************ 00:34:18.131 END TEST nvmf_target_core_interrupt_mode 00:34:18.131 ************************************ 00:34:18.131 14:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:18.394 14:38:45 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:18.394 14:38:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:18.394 14:38:45 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:18.394 14:38:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.394 ************************************ 00:34:18.394 START TEST nvmf_interrupt 00:34:18.394 ************************************ 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:18.394 * Looking for test storage... 00:34:18.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.394 --rc genhtml_branch_coverage=1 00:34:18.394 --rc genhtml_function_coverage=1 00:34:18.394 --rc genhtml_legend=1 00:34:18.394 --rc geninfo_all_blocks=1 00:34:18.394 --rc geninfo_unexecuted_blocks=1 00:34:18.394 00:34:18.394 ' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.394 --rc genhtml_branch_coverage=1 00:34:18.394 --rc genhtml_function_coverage=1 00:34:18.394 --rc genhtml_legend=1 00:34:18.394 --rc geninfo_all_blocks=1 00:34:18.394 --rc geninfo_unexecuted_blocks=1 00:34:18.394 00:34:18.394 ' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.394 --rc genhtml_branch_coverage=1 00:34:18.394 --rc genhtml_function_coverage=1 00:34:18.394 --rc genhtml_legend=1 00:34:18.394 --rc geninfo_all_blocks=1 00:34:18.394 --rc geninfo_unexecuted_blocks=1 00:34:18.394 00:34:18.394 ' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.394 --rc genhtml_branch_coverage=1 00:34:18.394 --rc genhtml_function_coverage=1 00:34:18.394 --rc genhtml_legend=1 00:34:18.394 --rc geninfo_all_blocks=1 00:34:18.394 --rc geninfo_unexecuted_blocks=1 00:34:18.394 00:34:18.394 ' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
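The scripts/common.sh walk above is the coverage gate: lt 1.15 2 asks whether the installed lcov predates 2.x, evidently to pick the pre-2.0 spelling of the --rc lcov_*_coverage options that LCOV_OPTS then exports. A condensed sketch of the comparison being traced, using the same helper names:

    # cmp_versions splits each version on '.', '-' and ':' and compares field by
    # field; here ver1=(1 15) and ver2=(2), so the first field decides 1 < 2.
    lt() { cmp_versions "$1" '<' "$2"; }
    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    (( ver1[0] < ver2[0] )) && echo "lcov < 2: keep legacy --rc option names"
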
00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.394 14:38:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:18.395 14:38:45 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:18.395 Cannot find device "nvmf_init_br" 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:18.395 Cannot find device "nvmf_init_br2" 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:18.395 Cannot find device "nvmf_tgt_br" 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:34:18.395 14:38:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:18.395 Cannot find device "nvmf_tgt_br2" 00:34:18.395 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:34:18.395 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:18.395 Cannot find device "nvmf_init_br" 00:34:18.395 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:34:18.395 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:18.395 Cannot find device "nvmf_init_br2" 00:34:18.395 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:34:18.395 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:18.656 Cannot find device "nvmf_tgt_br" 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:18.656 Cannot find device "nvmf_tgt_br2" 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:18.656 Cannot find device "nvmf_br" 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:34:18.656 Cannot find device "nvmf_init_if" 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:18.656 Cannot find device "nvmf_init_if2" 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:18.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:18.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
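The 'Cannot find device' lines above are expected: nvmf_veth_init first deletes any leftover links from a previous run (each guarded so a miss is '# true'), then rebuilds the fixture from scratch. After the 'master' and 'up' commands that follow, the layout is two host-side initiator interfaces and two target interfaces moved into the nvmf_tgt_ns_spdk namespace, all joined through one bridge; the ipts wrapper then inserts ACCEPT rules tagged with an 'SPDK_NVMF' comment, which is what the iptr teardown earlier (iptables-save | grep -v SPDK_NVMF | iptables-restore) strips back out. The resulting topology, reconstructed from the setup commands as an annotation:

    #   host netns                          bridge        netns nvmf_tgt_ns_spdk
    #   nvmf_init_if   10.0.0.1 ── nvmf_init_br  ─┐
    #   nvmf_init_if2  10.0.0.2 ── nvmf_init_br2 ─┤
    #                                             ├─ nvmf_br
    #   nvmf_tgt_if    10.0.0.3 ── nvmf_tgt_br   ─┤   (tgt_if/tgt_if2 live in
    #   nvmf_tgt_if2   10.0.0.4 ── nvmf_tgt_br2  ─┘    the target namespace)
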
00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:18.656 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:18.656 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:34:18.656 00:34:18.656 --- 10.0.0.3 ping statistics --- 00:34:18.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.656 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:18.656 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:18.656 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:34:18.656 00:34:18.656 --- 10.0.0.4 ping statistics --- 00:34:18.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.656 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:18.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:34:18.656 00:34:18.656 --- 10.0.0.1 ping statistics --- 00:34:18.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.656 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:18.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:18.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:34:18.656 00:34:18.656 --- 10.0.0.2 ping statistics --- 00:34:18.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.656 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=116714 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 116714 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 116714 ']' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:18.656 14:38:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:18.917 [2024-11-06 14:38:46.359379] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:18.917 [2024-11-06 14:38:46.361439] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:18.917 [2024-11-06 14:38:46.361535] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.917 [2024-11-06 14:38:46.525259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:19.185 [2024-11-06 14:38:46.672504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:19.185 [2024-11-06 14:38:46.672577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.185 [2024-11-06 14:38:46.672596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.185 [2024-11-06 14:38:46.672608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.185 [2024-11-06 14:38:46.672617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.185 [2024-11-06 14:38:46.674922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.185 [2024-11-06 14:38:46.675034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.445 [2024-11-06 14:38:46.938529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.445 [2024-11-06 14:38:46.939263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:19.445 [2024-11-06 14:38:46.939298] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:19.706 5000+0 records in 00:34:19.706 5000+0 records out 00:34:19.706 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0195862 s, 523 MB/s 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.706 AIO0 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.706 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.706 [2024-11-06 14:38:47.328460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.968 [2024-11-06 14:38:47.360548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 116714 0 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 116714 0 idle 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116714 root 20 0 20.1t 180488 102528 S 0.0 1.5 0:00.57 reactor_0' 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116714 root 20 0 20.1t 180488 102528 S 0.0 1.5 0:00.57 reactor_0 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:19.968 14:38:47 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 116714 1 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 116714 1 idle 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:19.968 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116729 root 20 0 20.1t 180488 102528 S 0.0 1.5 0:00.00 reactor_1' 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116729 root 20 0 20.1t 180488 102528 S 0.0 1.5 0:00.00 reactor_1 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=116786 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # 
BUSY_THRESHOLD=30 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 116714 0 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 116714 0 busy 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:20.230 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116714 root 20 0 20.1t 180488 102528 S 0.0 1.5 0:00.57 reactor_0' 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116714 root 20 0 20.1t 180488 102528 S 0.0 1.5 0:00.57 reactor_0 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:20.488 14:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:21.426 14:38:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:21.426 14:38:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:21.426 14:38:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:21.426 14:38:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116714 root 20 0 20.1t 193800 103552 R 99.9 1.6 0:01.94 reactor_0' 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116714 root 20 0 20.1t 193800 103552 R 99.9 1.6 0:01.94 reactor_0 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:21.686 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 116714 1 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 116714 1 busy 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116729 root 20 0 20.1t 194056 103552 D 60.0 1.6 0:00.79 reactor_1' 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116729 root 20 0 20.1t 194056 103552 D 60.0 1.6 0:00.79 reactor_1 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:21.687 14:38:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 116786 00:34:31.678 Initializing NVMe Controllers 00:34:31.678 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:34:31.678 Controller IO queue size 256, less than required. 00:34:31.678 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:31.678 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:31.678 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:31.678 Initialization complete. Launching workers. 
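The reactor_is_busy_or_idle checks traced above reduce to one-shot top samples of the target's reactor threads: take the line for reactor_<idx>, read top's ninth column (%CPU), truncate it to an integer, and compare it against a threshold, retrying up to ten times with a one-second pause between samples. The condensed bash sketch below reconstructs that polling loop from the interrupt/common.sh xtrace in this log; pid 116714 and the 30% busy threshold are the values this run used, and the sketch is illustrative rather than the canonical helper.

# Sample %CPU of thread reactor_<idx> inside SPDK target <pid>.
# In top's default field layout the 9th column is %CPU, which is
# why the trace above pipes through awk '{print $9}'.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | awk '{print $9}'
}

# Retry loop as traced above: j starts at 10 and a 1-second sleep
# separates samples until the reactor crosses the busy threshold.
for ((j = 10; j != 0; j--)); do
    cpu_rate=$(reactor_cpu_rate 116714 0)
    cpu_rate=${cpu_rate%.*}            # e.g. 99.9 -> 99, as in the trace
    if (( cpu_rate >= 30 )); then      # BUSY_THRESHOLD=30 in this run
        echo "reactor_0 is busy (${cpu_rate}%)"
        break
    fi
    sleep 1
done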
00:34:31.678 ========================================================
00:34:31.678 Latency(us)
00:34:31.678 Device Information : IOPS MiB/s Average min max
00:34:31.678 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 7689.10 30.04 33339.60 7377.22 54880.76
00:34:31.678 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 8317.70 32.49 30810.10 9849.51 61337.21
00:34:31.678 ========================================================
00:34:31.678 Total : 16006.80 62.53 32025.18 7377.22 61337.21
00:34:31.678
00:34:31.678 14:38:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:34:31.678 14:38:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 116714 0
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 116714 0 idle
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116714 root 20 0 20.1t 194312 103552 S 0.0 1.6 0:13.60 reactor_0'
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116714 root 20 0 20.1t 194312 103552 S 0.0 1.6 0:13.60 reactor_0
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 116714 1
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 116714 1 idle
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714
00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116729 root 20 0 20.1t 194312 103552 S 0.0 1.6 0:06.50 reactor_1' 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116729 root 20 0 20.1t 194312 103552 S 0.0 1.6 0:06.50 reactor_1 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:31.678 14:38:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 116714 0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 116714 0 idle 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116714 root 20 0 20.1t 199944 105216 S 0.0 1.6 0:13.66 reactor_0' 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116714 root 20 0 20.1t 199944 105216 S 0.0 1.6 0:13.66 reactor_0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 116714 1 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 116714 1 idle 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=116714 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:33.063 14:39:00 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 116714 -w 256 00:34:33.063 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 116729 root 20 0 20.1t 199944 105216 S 0.0 1.6 0:06.52 reactor_1' 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 116729 root 20 0 20.1t 199944 105216 S 0.0 1.6 0:06.52 reactor_1 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:33.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:33.324 14:39:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.585 rmmod nvme_tcp 00:34:33.585 rmmod nvme_fabrics 00:34:33.585 rmmod nvme_keyring 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 116714 ']' 00:34:33.585 
14:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 116714 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 116714 ']' 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 116714 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:33.585 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 116714 00:34:33.845 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:33.845 killing process with pid 116714 00:34:33.845 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:33.845 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 116714' 00:34:33.845 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 116714 00:34:33.845 14:39:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 116714 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:34:34.788 00:34:34.788 real 0m16.618s 00:34:34.788 user 0m30.620s 00:34:34.788 sys 0m5.472s 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:34.788 ************************************ 00:34:34.788 END TEST nvmf_interrupt 00:34:34.788 ************************************ 00:34:34.788 14:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.048 00:34:35.048 real 27m42.858s 00:34:35.048 user 81m22.383s 00:34:35.048 sys 4m44.142s 00:34:35.048 ************************************ 00:34:35.048 END TEST nvmf_tcp 00:34:35.048 ************************************ 00:34:35.048 14:39:02 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:35.049 14:39:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.049 14:39:02 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:35.049 14:39:02 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:35.049 14:39:02 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:35.049 14:39:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:35.049 14:39:02 -- common/autotest_common.sh@10 -- # set +x 00:34:35.049 ************************************ 00:34:35.049 START TEST spdkcli_nvmf_tcp 00:34:35.049 ************************************ 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:35.049 * Looking for test storage... 
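Each suite in this log runs through the same run_test wrapper: it prints a START TEST banner, times the body, and emits the END TEST banner plus the real/user/sys summary seen above for nvmf_interrupt and nvmf_tcp. The following is a rough stand-in inferred from those banners and timing lines; the real autotest_common.sh helper carries extra bookkeeping (xtrace toggling, failure accounting), so treat this as a sketch rather than the actual implementation.

# Simplified stand-in for the run_test wrapper whose banners appear in this log.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # emits the real/user/sys lines seen after each suite
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# As invoked above:
# run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp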
00:34:35.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:35.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.049 --rc genhtml_branch_coverage=1 00:34:35.049 --rc genhtml_function_coverage=1 00:34:35.049 --rc genhtml_legend=1 00:34:35.049 --rc geninfo_all_blocks=1 00:34:35.049 --rc geninfo_unexecuted_blocks=1 00:34:35.049 00:34:35.049 ' 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:35.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.049 --rc genhtml_branch_coverage=1 
00:34:35.049 --rc genhtml_function_coverage=1 00:34:35.049 --rc genhtml_legend=1 00:34:35.049 --rc geninfo_all_blocks=1 00:34:35.049 --rc geninfo_unexecuted_blocks=1 00:34:35.049 00:34:35.049 ' 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:35.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.049 --rc genhtml_branch_coverage=1 00:34:35.049 --rc genhtml_function_coverage=1 00:34:35.049 --rc genhtml_legend=1 00:34:35.049 --rc geninfo_all_blocks=1 00:34:35.049 --rc geninfo_unexecuted_blocks=1 00:34:35.049 00:34:35.049 ' 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:35.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.049 --rc genhtml_branch_coverage=1 00:34:35.049 --rc genhtml_function_coverage=1 00:34:35.049 --rc genhtml_legend=1 00:34:35.049 --rc geninfo_all_blocks=1 00:34:35.049 --rc geninfo_unexecuted_blocks=1 00:34:35.049 00:34:35.049 ' 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.049 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:35.309 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:35.309 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.309 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:35.310 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=117136 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 117136 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 117136 ']' 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:35.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:35.310 14:39:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.310 [2024-11-06 14:39:02.773045] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:35.310 [2024-11-06 14:39:02.773307] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117136 ] 00:34:35.310 [2024-11-06 14:39:02.932771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.571 [2024-11-06 14:39:03.086809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.571 [2024-11-06 14:39:03.086925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.144 14:39:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:36.144 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:36.144 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:36.144 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:36.144 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:36.144 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:36.144 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:36.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:36.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:36.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:36.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:36.144 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:36.144 ' 00:34:39.445 [2024-11-06 14:39:06.467601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.379 [2024-11-06 14:39:07.738312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:42.907 [2024-11-06 14:39:10.088030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:44.807 [2024-11-06 14:39:12.117393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:46.182 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:46.182 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:46.182 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:46.182 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:34:46.182 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:46.182 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:46.182 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:46.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:46.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:46.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:46.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:46.183 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:46.183 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:46.183 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.183 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:34:46.183 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:46.183 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:46.183 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.441 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:46.441 14:39:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.699 14:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:46.699 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:46.699 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:46.699 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:46.699 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:46.699 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:46.699 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:46.699 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:46.699 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:46.699 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:46.699 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:46.699 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:46.699 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:46.699 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:46.699 ' 00:34:53.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:53.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:53.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:53.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:53.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:53.259 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:53.259 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:53.259 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:53.259 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:53.259 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:53.259 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:53.259 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:53.259 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:53.259 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 117136 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 117136 ']' 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 117136 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 117136 00:34:53.259 killing process with pid 117136 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 117136' 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 117136 00:34:53.259 14:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 117136 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 117136 ']' 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 117136 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 117136 ']' 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 117136 00:34:53.259 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (117136) - No such process 00:34:53.259 Process with pid 117136 is not found 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 117136 is not found' 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:53.259 ************************************ 00:34:53.259 END TEST spdkcli_nvmf_tcp 00:34:53.259 ************************************ 
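The spdkcli suite that just completed follows a create, verify, clear pattern: batch a list of spdkcli commands through spdkcli_job.py, dump the resulting tree with ll /nvmf and diff it against a pre-recorded .match file, then delete everything in reverse order. Below is a condensed sketch of the same flow driven through single-shot spdkcli.py calls, the single-command mode check_match itself uses above for ll /nvmf; the command paths and serial number are taken verbatim from this log, and a running nvmf_tgt is assumed.

#!/usr/bin/env bash
# Condensed create -> verify -> clear flow, mirroring the spdkcli test above.
SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py

# Create: one malloc bdev, the TCP transport, a subsystem, a namespace, a listener.
$SPDKCLI /bdevs/malloc create 32 512 Malloc1
$SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
$SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

# Verify: the test diffs this listing against spdkcli_nvmf.test.match.
$SPDKCLI ll /nvmf

# Clear: tear the configuration down in reverse order.
$SPDKCLI /nvmf/subsystem delete_all
$SPDKCLI /bdevs/malloc delete Malloc1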
00:34:53.259 00:34:53.259 real 0m17.979s 00:34:53.259 user 0m37.769s 00:34:53.259 sys 0m0.942s 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:53.259 14:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.259 14:39:20 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:53.259 14:39:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:53.259 14:39:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:53.259 14:39:20 -- common/autotest_common.sh@10 -- # set +x 00:34:53.259 ************************************ 00:34:53.259 START TEST nvmf_identify_passthru 00:34:53.259 ************************************ 00:34:53.259 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:53.259 * Looking for test storage... 00:34:53.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.260 --rc genhtml_branch_coverage=1 00:34:53.260 --rc genhtml_function_coverage=1 00:34:53.260 --rc genhtml_legend=1 00:34:53.260 --rc geninfo_all_blocks=1 00:34:53.260 --rc geninfo_unexecuted_blocks=1 00:34:53.260 00:34:53.260 ' 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.260 --rc genhtml_branch_coverage=1 00:34:53.260 --rc genhtml_function_coverage=1 00:34:53.260 --rc genhtml_legend=1 00:34:53.260 --rc geninfo_all_blocks=1 00:34:53.260 --rc geninfo_unexecuted_blocks=1 00:34:53.260 00:34:53.260 ' 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.260 --rc genhtml_branch_coverage=1 00:34:53.260 --rc genhtml_function_coverage=1 00:34:53.260 --rc genhtml_legend=1 00:34:53.260 --rc geninfo_all_blocks=1 00:34:53.260 --rc geninfo_unexecuted_blocks=1 00:34:53.260 00:34:53.260 ' 00:34:53.260 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.260 --rc genhtml_branch_coverage=1 00:34:53.260 --rc genhtml_function_coverage=1 00:34:53.260 --rc genhtml_legend=1 00:34:53.260 --rc geninfo_all_blocks=1 00:34:53.260 --rc geninfo_unexecuted_blocks=1 00:34:53.260 00:34:53.260 ' 00:34:53.260 14:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.260 
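The lcov probe being traced here is scripts/common.sh's generic version comparator: both version strings are split on '.', '-' and ':' into arrays, padded to the same length, and compared numerically one component at a time, so lt 1.15 2 simply asks whether lcov 1.15 sorts before 2. A compressed sketch of the same logic (the traced code routes each component through a decimal helper to validate non-numeric parts, reduced to a :-0 default here):

cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"               # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"               # "2"    -> (2)
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                            # every component matched
}
lt() { cmp_versions "$1" '<' "$2"; }             # lt 1.15 2 -> true, so the LCOV options below get set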
14:39:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.260 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.260 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.260 14:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.260 14:39:20 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.260 14:39:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:53.261 14:39:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.261 14:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.261 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.261 14:39:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:53.261 Cannot find device "nvmf_init_br" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:53.261 Cannot find device "nvmf_init_br2" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:53.261 Cannot find device "nvmf_tgt_br" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:53.261 Cannot find device "nvmf_tgt_br2" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:53.261 Cannot find device "nvmf_init_br" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:53.261 Cannot find device "nvmf_init_br2" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:53.261 Cannot find device "nvmf_tgt_br" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:53.261 Cannot find device "nvmf_tgt_br2" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:53.261 Cannot find device "nvmf_br" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:53.261 Cannot find device "nvmf_init_if" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:53.261 Cannot find device "nvmf_init_if2" 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:53.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:53.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:53.261 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:53.522 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:53.523 14:39:20 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:53.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:53.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:34:53.523 00:34:53.523 --- 10.0.0.3 ping statistics --- 00:34:53.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.523 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:53.523 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:53.523 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:34:53.523 00:34:53.523 --- 10.0.0.4 ping statistics --- 00:34:53.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.523 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:53.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:53.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:34:53.523 00:34:53.523 --- 10.0.0.1 ping statistics --- 00:34:53.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.523 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:53.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:53.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:34:53.523 00:34:53.523 --- 10.0.0.2 ping statistics --- 00:34:53.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.523 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:53.523 14:39:21 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:53.523 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.523 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:34:53.523 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:34:53.523 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:34:53.523 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:34:53.523 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:34:53.523 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:53.523 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:53.785 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
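The nvmf_veth_init block above is SPDK's standard virtual test topology: a dedicated target namespace, veth pairs whose *_if ends carry the 10.0.0.0/24 addresses while the *_br peer ends are enslaved to one bridge, comment-tagged iptables ACCEPT rules for the NVMe/TCP port, and four pings proving reachability in both directions. Condensed to a single initiator/target pair (the run above also creates a second if2/br2 pair the same way; the leftover-cleanup pass and its expected "Cannot find device" noise are dropped), the bring-up is roughly:

NS=nvmf_tgt_ns_spdk
ip netns add $NS

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair ...
ip link set nvmf_tgt_if netns $NS                           # ... whose _if end moves into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_INITIATOR_IP
ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec $NS ip link set nvmf_tgt_if up
ip netns exec $NS ip link set lo up

ip link add nvmf_br type bridge                             # one bridge joins the host-side peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3                                          # host -> namespace
ip netns exec $NS ping -c 1 10.0.0.1                        # namespace -> host

The SPDK_NVMF comment on each rule is what makes teardown cheap later: nvmftestfini can remove exactly these rules by filtering an iptables-save dump instead of tracking rule numbers.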
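The nvme_serial_number value that closes above is the tail of a small discovery pipeline: gen_nvme.sh emits a bdev-config JSON for the local NVMe devices, jq pulls out each controller's PCI address, and spdk_nvme_identify against the first address is grepped for the one field needed. The same steps as a standalone sketch (identical binaries and flags to the trace; rootdir stands in for the /home/vagrant/spdk_repo/spdk checkout):

rootdir=/home/vagrant/spdk_repo/spdk

# every local NVMe controller the config generator knows about, as PCI addresses
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} )) || exit 1               # this run found two: 0000:00:10.0 and 0000:00:11.0
bdf=${bdfs[0]}

# identify the controller over PCIe and keep only the serial number column
serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
    grep 'Serial Number:' | awk '{print $3}')
echo "$bdf serial=$serial"                # 0000:00:10.0 serial=12340 in this run

The Model Number extraction just below uses the identical grep/awk pattern, and the point of the test is that both values must read back unchanged over TCP once the passthru subsystem is up.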
00:34:53.785 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:34:53.785 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:53.785 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:54.045 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:34:54.045 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:54.045 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:54.045 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.307 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.307 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=117656 00:34:54.307 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:54.307 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 117656 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 117656 ']' 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:54.307 14:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.307 14:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:54.307 [2024-11-06 14:39:21.807988] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:54.307 [2024-11-06 14:39:21.808267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.568 [2024-11-06 14:39:21.979985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:54.568 [2024-11-06 14:39:22.128156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.568 [2024-11-06 14:39:22.128578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:54.568 [2024-11-06 14:39:22.128603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.568 [2024-11-06 14:39:22.128613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:54.568 [2024-11-06 14:39:22.128625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:54.568 [2024-11-06 14:39:22.130863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.568 [2024-11-06 14:39:22.131210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:54.568 [2024-11-06 14:39:22.131507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:54.568 [2024-11-06 14:39:22.131599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.141 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:55.141 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:34:55.141 14:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:55.141 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.141 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.141 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.141 14:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:55.141 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.141 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.403 [2024-11-06 14:39:22.969591] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:55.403 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.403 14:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:55.403 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.403 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.403 [2024-11-06 14:39:22.985961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.403 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.403 14:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:55.403 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:55.403 14:39:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.665 Nvme0n1 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.665 [2024-11-06 14:39:23.143983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.665 [ 00:34:55.665 { 00:34:55.665 "allow_any_host": true, 00:34:55.665 "hosts": [], 00:34:55.665 "listen_addresses": [], 00:34:55.665 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:55.665 "subtype": "Discovery" 00:34:55.665 }, 00:34:55.665 { 00:34:55.665 "allow_any_host": true, 00:34:55.665 "hosts": [], 00:34:55.665 "listen_addresses": [ 00:34:55.665 { 00:34:55.665 "adrfam": "IPv4", 00:34:55.665 "traddr": "10.0.0.3", 00:34:55.665 "trsvcid": "4420", 00:34:55.665 "trtype": "TCP" 00:34:55.665 } 00:34:55.665 ], 00:34:55.665 "max_cntlid": 65519, 00:34:55.665 "max_namespaces": 1, 00:34:55.665 "min_cntlid": 1, 00:34:55.665 "model_number": "SPDK bdev Controller", 00:34:55.665 "namespaces": [ 00:34:55.665 { 00:34:55.665 "bdev_name": "Nvme0n1", 00:34:55.665 "name": "Nvme0n1", 00:34:55.665 "nguid": "9B77188414D5468FA368A76AC81ED6E5", 00:34:55.665 "nsid": 1, 00:34:55.665 "uuid": "9b771884-14d5-468f-a368-a76ac81ed6e5" 00:34:55.665 } 00:34:55.665 ], 00:34:55.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.665 "serial_number": "SPDK00000000000001", 00:34:55.665 "subtype": "NVMe" 00:34:55.665 } 00:34:55.665 ] 00:34:55.665 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:55.665 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:55.927 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:34:55.927 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:55.927 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:55.927 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:56.188 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:34:56.188 14:39:23 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:34:56.188 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:34:56.188 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:56.188 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.188 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.188 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.188 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:56.188 14:39:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:56.188 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:56.188 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:56.449 rmmod nvme_tcp 00:34:56.449 rmmod nvme_fabrics 00:34:56.449 rmmod nvme_keyring 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 117656 ']' 00:34:56.449 14:39:23 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 117656 00:34:56.449 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 117656 ']' 00:34:56.449 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 117656 00:34:56.449 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:34:56.449 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:56.449 14:39:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 117656 00:34:56.449 killing process with pid 117656 00:34:56.449 14:39:24 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:56.449 14:39:24 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:56.449 14:39:24 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 117656' 00:34:56.449 14:39:24 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 117656 00:34:56.449 14:39:24 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 117656 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-save 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:57.388 14:39:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.388 14:39:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:57.388 14:39:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.388 14:39:25 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:34:57.388 00:34:57.388 real 0m4.472s 00:34:57.388 user 0m10.723s 00:34:57.388 sys 0m1.171s 00:34:57.388 14:39:25 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:57.388 14:39:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.388 ************************************ 00:34:57.388 END TEST nvmf_identify_passthru 00:34:57.388 ************************************ 00:34:57.649 14:39:25 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:57.649 14:39:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:57.649 14:39:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:57.649 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:34:57.649 ************************************ 00:34:57.649 START TEST nvmf_dif 00:34:57.649 ************************************ 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:57.649 * Looking for test storage... 
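Stripped of the rpc_cmd plumbing and the nvmf_get_subsystems JSON dump, the passthru target that the just-finished test built reduces to a short RPC sequence. A sketch over scripts/rpc.py (which is what autotest's rpc_cmd wraps), with flags copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_set_config --passthru-identify-ctrlr     # must land before framework init (--wait-for-rpc)
$rpc framework_start_init                          # logs 'Custom identify ctrlr handler enabled'
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes bdev Nvme0n1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# the passthru check: identify over the fabric must return the PCIe values (12340 / QEMU here)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
    grep -E 'Serial Number:|Model Number:'

Teardown is symmetric, and the iptr helper traced in nvmftestfini above shows why the firewall rules were comment-tagged: iptables-save | grep -v SPDK_NVMF | iptables-restore drops exactly the rules this test added before the veth pairs, bridge, and namespace are deleted.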
00:34:57.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:57.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.649 --rc genhtml_branch_coverage=1 00:34:57.649 --rc genhtml_function_coverage=1 00:34:57.649 --rc genhtml_legend=1 00:34:57.649 --rc geninfo_all_blocks=1 00:34:57.649 --rc geninfo_unexecuted_blocks=1 00:34:57.649 00:34:57.649 ' 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:57.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.649 --rc genhtml_branch_coverage=1 00:34:57.649 --rc genhtml_function_coverage=1 00:34:57.649 --rc genhtml_legend=1 00:34:57.649 --rc geninfo_all_blocks=1 00:34:57.649 --rc geninfo_unexecuted_blocks=1 00:34:57.649 00:34:57.649 ' 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:34:57.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.649 --rc genhtml_branch_coverage=1 00:34:57.649 --rc genhtml_function_coverage=1 00:34:57.649 --rc genhtml_legend=1 00:34:57.649 --rc geninfo_all_blocks=1 00:34:57.649 --rc geninfo_unexecuted_blocks=1 00:34:57.649 00:34:57.649 ' 00:34:57.649 14:39:25 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:57.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.649 --rc genhtml_branch_coverage=1 00:34:57.649 --rc genhtml_function_coverage=1 00:34:57.649 --rc genhtml_legend=1 00:34:57.649 --rc geninfo_all_blocks=1 00:34:57.649 --rc geninfo_unexecuted_blocks=1 00:34:57.649 00:34:57.649 ' 00:34:57.649 14:39:25 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:57.649 14:39:25 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.649 14:39:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.650 14:39:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.650 14:39:25 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.650 14:39:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.650 14:39:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:57.650 14:39:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:57.650 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:57.650 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:57.650 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:57.650 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:57.650 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:57.650 14:39:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.650 14:39:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:57.650 14:39:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:57.650 14:39:25 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:57.650 Cannot find device "nvmf_init_br" 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@162 -- # true 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:57.650 Cannot find device "nvmf_init_br2" 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@163 -- # true 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:57.650 Cannot find device "nvmf_tgt_br" 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@164 -- # true 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:57.650 Cannot find device "nvmf_tgt_br2" 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@165 -- # true 00:34:57.650 14:39:25 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:57.911 Cannot find device "nvmf_init_br" 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@166 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:57.911 Cannot find device "nvmf_init_br2" 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@167 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:57.911 Cannot find device "nvmf_tgt_br" 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@168 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:57.911 Cannot find device "nvmf_tgt_br2" 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@169 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:57.911 Cannot find device "nvmf_br" 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@170 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:34:57.911 Cannot find device "nvmf_init_if" 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@171 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:57.911 Cannot find device "nvmf_init_if2" 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@172 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:57.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@173 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:57.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@174 -- # true 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:57.911 14:39:25 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:57.911 14:39:25 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:57.912 14:39:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:57.912 14:39:25 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:57.912 14:39:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:58.173 14:39:25 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:58.173 14:39:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:58.173 14:39:25 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:58.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:58.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:34:58.173 00:34:58.173 --- 10.0.0.3 ping statistics --- 00:34:58.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.173 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:34:58.173 14:39:25 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:58.173 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:58.173 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:34:58.173 00:34:58.173 --- 10.0.0.4 ping statistics --- 00:34:58.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.173 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:34:58.173 14:39:25 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:58.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:58.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:34:58.173 00:34:58.173 --- 10.0.0.1 ping statistics --- 00:34:58.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.173 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:34:58.173 14:39:25 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:58.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
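The block above is the veth/namespace plumbing from test/nvmf/common.sh: leftover interfaces are torn down first (the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host), then veth pairs are created for initiator and target, the target ends are moved into nvmf_tgt_ns_spdk, the peer ends are enslaved to the nvmf_br bridge, and iptables rules open TCP port 4420. A trimmed, standalone sketch of one initiator/target pair (the script repeats the same steps for the *_if2/*_br2 interfaces):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # ipts tags every rule with an SPDK_NVMF comment (visible in the expanded
    # iptables commands above) so teardown can strip exactly these rules later:
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings around this point verify both directions across the bridge (root namespace to 10.0.0.3/10.0.0.4, and from inside the namespace back to 10.0.0.1/10.0.0.2) before any NVMe/TCP traffic is attempted.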
00:34:58.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:34:58.173 00:34:58.173 --- 10.0.0.2 ping statistics --- 00:34:58.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.173 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:34:58.174 14:39:25 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:58.174 14:39:25 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:34:58.174 14:39:25 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:58.174 14:39:25 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:58.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:58.436 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:58.436 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:58.436 14:39:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:58.436 14:39:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=118066 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 118066 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 118066 ']' 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:58.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:58.436 14:39:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.436 14:39:25 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:58.436 [2024-11-06 14:39:26.048828] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:58.436 [2024-11-06 14:39:26.048990] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:58.696 [2024-11-06 14:39:26.214354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.696 [2024-11-06 14:39:26.304430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
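With connectivity confirmed, nvmfappstart launches the target inside the namespace: NVMF_APP gets the NVMF_TARGET_NS_CMD prefix, so nvmf_tgt binds its TCP listeners on the namespace side of the veth pairs while fio later runs from the root namespace. A minimal sketch of the launch-and-wait sequence (the real waitforlisten helper polls both the PID and the RPC socket; rpc_get_methods is used here only as a liveness probe):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Block until the app answers on its default RPC socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

Here -i 0 pins the shared-memory id that was built into NVMF_APP earlier (-i "$NVMF_APP_SHM_ID"), and -e 0xFFFF enables every tracepoint group, which is what triggers the spdk_trace hints printed at startup.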
00:34:58.696 [2024-11-06 14:39:26.304466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.696 [2024-11-06 14:39:26.304476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:58.696 [2024-11-06 14:39:26.304489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:58.696 [2024-11-06 14:39:26.304496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:58.696 [2024-11-06 14:39:26.305227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.269 14:39:26 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:59.269 14:39:26 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:34:59.269 14:39:26 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:59.269 14:39:26 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.269 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.531 14:39:26 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.531 14:39:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:59.531 14:39:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:59.531 14:39:26 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.531 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.531 [2024-11-06 14:39:26.924024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.531 14:39:26 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.531 14:39:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:59.531 14:39:26 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:59.531 14:39:26 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:59.531 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.531 ************************************ 00:34:59.531 START TEST fio_dif_1_default 00:34:59.531 ************************************ 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.531 bdev_null0 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.531 14:39:26 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:59.531 [2024-11-06 14:39:26.972187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:34:59.531 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:59.531 { 00:34:59.531 "params": { 00:34:59.532 "name": "Nvme$subsystem", 00:34:59.532 "trtype": "$TEST_TRANSPORT", 00:34:59.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.532 "adrfam": "ipv4", 00:34:59.532 "trsvcid": "$NVMF_PORT", 00:34:59.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.532 "hdgst": ${hdgst:-false}, 00:34:59.532 "ddgst": ${ddgst:-false} 00:34:59.532 }, 00:34:59.532 "method": "bdev_nvme_attach_controller" 00:34:59.532 } 00:34:59.532 EOF 00:34:59.532 )") 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:59.532 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:59.532 "params": { 00:34:59.532 "name": "Nvme0", 00:34:59.532 "trtype": "tcp", 00:34:59.532 "traddr": "10.0.0.3", 00:34:59.532 "adrfam": "ipv4", 00:34:59.532 "trsvcid": "4420", 00:34:59.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.532 "hdgst": false, 00:34:59.532 "ddgst": false 00:34:59.532 }, 00:34:59.532 "method": "bdev_nvme_attach_controller" 00:34:59.532 }' 00:34:59.532 14:39:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:59.532 14:39:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:59.532 14:39:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # break 00:34:59.532 14:39:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:59.532 14:39:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.793 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:59.793 fio-3.35 00:34:59.793 Starting 1 thread 00:35:12.047 00:35:12.047 filename0: (groupid=0, jobs=1): err= 0: pid=118143: Wed Nov 6 14:39:37 2024 00:35:12.047 read: IOPS=456, BW=1824KiB/s (1868kB/s)(17.8MiB/10015msec) 00:35:12.047 slat (nsec): min=5962, max=31280, avg=6716.40, stdev=1497.90 00:35:12.047 clat (usec): min=339, max=41981, avg=8750.23, stdev=16378.99 00:35:12.047 lat (usec): min=346, max=41988, avg=8756.94, stdev=16379.10 00:35:12.047 clat percentiles (usec): 00:35:12.047 | 1.00th=[ 347], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 359], 00:35:12.047 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 375], 00:35:12.047 | 70.00th=[ 383], 80.00th=[40633], 90.00th=[40633], 95.00th=[41157], 00:35:12.047 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:35:12.047 | 99.99th=[42206] 00:35:12.047 bw ( KiB/s): min= 608, max= 3200, per=100.00%, avg=1825.40, stdev=733.21, samples=20 00:35:12.047 iops : min= 152, max= 800, avg=456.35, stdev=183.30, samples=20 00:35:12.047 lat (usec) : 500=79.16%, 1000=0.09% 00:35:12.047 lat (msec) : 50=20.75% 00:35:12.047 cpu : usr=94.28%, 
sys=5.14%, ctx=34, majf=0, minf=1636 00:35:12.047 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.047 issued rwts: total=4568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.047 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:12.047 00:35:12.047 Run status group 0 (all jobs): 00:35:12.047 READ: bw=1824KiB/s (1868kB/s), 1824KiB/s-1824KiB/s (1868kB/s-1868kB/s), io=17.8MiB (18.7MB), run=10015-10015msec 00:35:12.047 ----------------------------------------------------- 00:35:12.047 Suppressions used: 00:35:12.047 count bytes template 00:35:12.047 1 8 /usr/src/fio/parse.c 00:35:12.047 1 8 libtcmalloc_minimal.so 00:35:12.047 1 904 libcrypto.so 00:35:12.047 ----------------------------------------------------- 00:35:12.047 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:12.047 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.047 00:35:12.047 real 0m11.848s 00:35:12.048 user 0m10.857s 00:35:12.048 sys 0m0.776s 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 ************************************ 00:35:12.048 END TEST fio_dif_1_default 00:35:12.048 ************************************ 00:35:12.048 14:39:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:12.048 14:39:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:12.048 14:39:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 ************************************ 00:35:12.048 START TEST fio_dif_1_multi_subsystems 00:35:12.048 ************************************ 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:12.048 14:39:38 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 bdev_null0 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 [2024-11-06 14:39:38.869738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 bdev_null1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:12.048 { 00:35:12.048 "params": { 00:35:12.048 "name": "Nvme$subsystem", 00:35:12.048 "trtype": "$TEST_TRANSPORT", 00:35:12.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:12.048 "adrfam": "ipv4", 00:35:12.048 "trsvcid": "$NVMF_PORT", 00:35:12.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:12.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:12.048 "hdgst": ${hdgst:-false}, 00:35:12.048 "ddgst": ${ddgst:-false} 00:35:12.048 }, 00:35:12.048 "method": "bdev_nvme_attach_controller" 00:35:12.048 } 00:35:12.048 EOF 00:35:12.048 )") 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # shift 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:12.048 { 00:35:12.048 "params": { 00:35:12.048 "name": "Nvme$subsystem", 00:35:12.048 "trtype": "$TEST_TRANSPORT", 00:35:12.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:12.048 "adrfam": "ipv4", 00:35:12.048 "trsvcid": "$NVMF_PORT", 00:35:12.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:12.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:12.048 "hdgst": ${hdgst:-false}, 00:35:12.048 "ddgst": ${ddgst:-false} 00:35:12.048 }, 00:35:12.048 "method": "bdev_nvme_attach_controller" 00:35:12.048 } 00:35:12.048 EOF 00:35:12.048 )") 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
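The config+=() heredocs, IFS=',' join, and jq pretty-print traced here are gen_nvmf_target_json assembling the bdev configuration that fio's SPDK plugin will consume: one bdev_nvme_attach_controller entry per subsystem id, wrapped in SPDK's standard JSON-config envelope. A compact reconstruction under those assumptions (the real helper in nvmf/common.sh may carry extra entries, e.g. a bdev_wait_for_examine step):

    gen_nvmf_target_json() {
        local subsystem entries=() joined
        for subsystem in "${@:-1}"; do
            entries+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.3","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$subsystem" "$subsystem" "$subsystem")")
        done
        # Join the per-subsystem objects with commas and validate/pretty-print
        # through jq, mirroring the IFS=,/printf/jq steps in the trace.
        joined=$(IFS=','; printf '%s' "${entries[*]}")
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$joined" | jq .
    }

Calling gen_nvmf_target_json 0 1 reproduces the two-controller document printed just below. fio never issues RPCs itself; it simply reads this JSON (delivered on /dev/fd/62 via process substitution) and attaches the controllers at startup, exposing them as bdevs Nvme0n1 and Nvme1n1.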
00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:12.048 "params": { 00:35:12.048 "name": "Nvme0", 00:35:12.048 "trtype": "tcp", 00:35:12.048 "traddr": "10.0.0.3", 00:35:12.048 "adrfam": "ipv4", 00:35:12.048 "trsvcid": "4420", 00:35:12.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:12.048 "hdgst": false, 00:35:12.048 "ddgst": false 00:35:12.048 }, 00:35:12.048 "method": "bdev_nvme_attach_controller" 00:35:12.048 },{ 00:35:12.048 "params": { 00:35:12.048 "name": "Nvme1", 00:35:12.048 "trtype": "tcp", 00:35:12.048 "traddr": "10.0.0.3", 00:35:12.048 "adrfam": "ipv4", 00:35:12.048 "trsvcid": "4420", 00:35:12.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:12.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:12.048 "hdgst": false, 00:35:12.048 "ddgst": false 00:35:12.048 }, 00:35:12.048 "method": "bdev_nvme_attach_controller" 00:35:12.048 }' 00:35:12.048 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:12.049 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:12.049 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # break 00:35:12.049 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:12.049 14:39:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:12.049 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:12.049 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:12.049 fio-3.35 00:35:12.049 Starting 2 threads 00:35:24.250 00:35:24.250 filename0: (groupid=0, jobs=1): err= 0: pid=118307: Wed Nov 6 14:39:49 2024 00:35:24.250 read: IOPS=277, BW=1111KiB/s (1138kB/s)(10.9MiB/10019msec) 00:35:24.250 slat (nsec): min=3938, max=23017, avg=6934.33, stdev=1751.00 00:35:24.250 clat (usec): min=341, max=41283, avg=14373.94, stdev=19237.43 00:35:24.250 lat (usec): min=347, max=41292, avg=14380.87, stdev=19237.49 00:35:24.250 clat percentiles (usec): 00:35:24.250 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 359], 20.00th=[ 367], 00:35:24.250 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:35:24.250 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:35:24.250 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:24.250 | 99.99th=[41157] 00:35:24.250 bw ( KiB/s): min= 576, max= 1760, per=48.08%, avg=1112.00, stdev=248.50, samples=20 00:35:24.250 iops : min= 144, max= 440, avg=278.00, stdev=62.12, samples=20 00:35:24.250 lat (usec) : 500=64.94%, 750=0.29%, 1000=0.14% 00:35:24.250 lat (msec) : 50=34.63% 00:35:24.250 cpu : usr=95.70%, sys=3.80%, ctx=18, majf=0, minf=1634 00:35:24.250 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.250 issued rwts: total=2784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
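Those Nvme0n1/Nvme1n1 bdevs are what the two fio jobs in this run address; dif.sh streams the jobfile itself on /dev/fd/61. A hedged reconstruction of that jobfile, written to disk for clarity (parameters taken from the fio banners and the roughly 10-second run times reported here; the generated file may carry additional options):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    gen_nvmf_target_json 0 1 > nvmf.json

    cat > dif.job <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=nvmf.json
    thread=1
    rw=randread
    bs=4096
    iodepth=4
    time_based=1
    runtime=10

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

    # ASan must be preloaded ahead of the plugin on this build, exactly as the
    # ldd/grep libasan detection above arranged via LD_PRELOAD.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio dif.job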
00:35:24.250 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:24.250 filename1: (groupid=0, jobs=1): err= 0: pid=118308: Wed Nov 6 14:39:49 2024 00:35:24.250 read: IOPS=300, BW=1203KiB/s (1232kB/s)(11.8MiB/10039msec) 00:35:24.250 slat (nsec): min=3054, max=18073, avg=6702.41, stdev=1398.53 00:35:24.250 clat (usec): min=343, max=41980, avg=13276.46, stdev=18845.08 00:35:24.250 lat (usec): min=349, max=41987, avg=13283.16, stdev=18845.05 00:35:24.250 clat percentiles (usec): 00:35:24.250 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 359], 20.00th=[ 367], 00:35:24.250 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 388], 00:35:24.250 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:35:24.250 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:35:24.250 | 99.99th=[42206] 00:35:24.250 bw ( KiB/s): min= 800, max= 1760, per=52.15%, avg=1206.40, stdev=227.01, samples=20 00:35:24.250 iops : min= 200, max= 440, avg=301.60, stdev=56.75, samples=20 00:35:24.250 lat (usec) : 500=67.81%, 750=0.13%, 1000=0.13% 00:35:24.250 lat (msec) : 50=31.92% 00:35:24.250 cpu : usr=95.72%, sys=3.78%, ctx=29, majf=0, minf=1636 00:35:24.250 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.250 issued rwts: total=3020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.250 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:24.250 00:35:24.250 Run status group 0 (all jobs): 00:35:24.250 READ: bw=2313KiB/s (2368kB/s), 1111KiB/s-1203KiB/s (1138kB/s-1232kB/s), io=22.7MiB (23.8MB), run=10019-10039msec 00:35:24.250 ----------------------------------------------------- 00:35:24.250 Suppressions used: 00:35:24.250 count bytes template 00:35:24.250 2 16 /usr/src/fio/parse.c 00:35:24.250 1 8 libtcmalloc_minimal.so 00:35:24.250 1 904 libcrypto.so 00:35:24.250 ----------------------------------------------------- 00:35:24.250 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@45 -- # for sub in "$@" 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.250 00:35:24.250 real 0m12.100s 00:35:24.250 user 0m20.842s 00:35:24.250 sys 0m1.047s 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:24.250 ************************************ 00:35:24.250 END TEST fio_dif_1_multi_subsystems 00:35:24.250 ************************************ 00:35:24.250 14:39:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.250 14:39:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:24.250 14:39:50 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:24.250 14:39:50 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:24.250 14:39:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.250 ************************************ 00:35:24.250 START TEST fio_dif_rand_params 00:35:24.250 ************************************ 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:24.250 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:24.251 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:24.251 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:24.251 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:24.251 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:24.251 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.251 bdev_null0 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.251 [2024-11-06 14:39:51.027182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:24.251 { 00:35:24.251 "params": { 00:35:24.251 "name": "Nvme$subsystem", 00:35:24.251 "trtype": "$TEST_TRANSPORT", 00:35:24.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.251 "adrfam": "ipv4", 00:35:24.251 "trsvcid": "$NVMF_PORT", 00:35:24.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.251 "hdgst": ${hdgst:-false}, 00:35:24.251 "ddgst": ${ddgst:-false} 00:35:24.251 }, 00:35:24.251 "method": 
"bdev_nvme_attach_controller" 00:35:24.251 } 00:35:24.251 EOF 00:35:24.251 )") 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:24.251 "params": { 00:35:24.251 "name": "Nvme0", 00:35:24.251 "trtype": "tcp", 00:35:24.251 "traddr": "10.0.0.3", 00:35:24.251 "adrfam": "ipv4", 00:35:24.251 "trsvcid": "4420", 00:35:24.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.251 "hdgst": false, 00:35:24.251 "ddgst": false 00:35:24.251 }, 00:35:24.251 "method": "bdev_nvme_attach_controller" 00:35:24.251 }' 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:24.251 14:39:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.251 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:24.251 ... 
00:35:24.251 fio-3.35 00:35:24.251 Starting 3 threads 00:35:29.518 00:35:29.518 filename0: (groupid=0, jobs=1): err= 0: pid=118467: Wed Nov 6 14:39:56 2024 00:35:29.518 read: IOPS=304, BW=38.0MiB/s (39.9MB/s)(190MiB/5002msec) 00:35:29.518 slat (nsec): min=4124, max=89197, avg=10956.70, stdev=3377.15 00:35:29.518 clat (usec): min=3338, max=51295, avg=9849.18, stdev=8543.40 00:35:29.518 lat (usec): min=3345, max=51314, avg=9860.13, stdev=8543.27 00:35:29.518 clat percentiles (usec): 00:35:29.518 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6456], 00:35:29.518 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:35:29.518 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[10552], 00:35:29.518 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:35:29.518 | 99.99th=[51119] 00:35:29.518 bw ( KiB/s): min=29952, max=46592, per=34.55%, avg=38741.33, stdev=4682.07, samples=9 00:35:29.518 iops : min= 234, max= 364, avg=302.67, stdev=36.58, samples=9 00:35:29.518 lat (msec) : 4=0.39%, 10=93.23%, 20=1.84%, 50=3.62%, 100=0.92% 00:35:29.518 cpu : usr=94.76%, sys=4.04%, ctx=12, majf=0, minf=1636 00:35:29.518 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.518 issued rwts: total=1521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.518 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.518 filename0: (groupid=0, jobs=1): err= 0: pid=118468: Wed Nov 6 14:39:56 2024 00:35:29.519 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(162MiB/5003msec) 00:35:29.519 slat (nsec): min=4174, max=23225, avg=9201.65, stdev=2431.41 00:35:29.519 clat (usec): min=3447, max=52387, avg=11564.65, stdev=10115.94 00:35:29.519 lat (usec): min=3459, max=52393, avg=11573.85, stdev=10115.98 00:35:29.519 clat percentiles (usec): 00:35:29.519 | 1.00th=[ 3523], 5.00th=[ 5145], 10.00th=[ 6194], 20.00th=[ 7373], 00:35:29.519 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 00:35:29.519 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[47973], 00:35:29.519 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[52167], 00:35:29.519 | 99.99th=[52167] 00:35:29.519 bw ( KiB/s): min=27904, max=42240, per=30.21%, avg=33877.33, stdev=5348.50, samples=9 00:35:29.519 iops : min= 218, max= 330, avg=264.67, stdev=41.79, samples=9 00:35:29.519 lat (msec) : 4=3.47%, 10=55.25%, 20=34.80%, 50=4.01%, 100=2.47% 00:35:29.519 cpu : usr=94.56%, sys=4.28%, ctx=9, majf=0, minf=1634 00:35:29.519 IO depths : 1=6.6%, 2=93.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.519 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.519 filename0: (groupid=0, jobs=1): err= 0: pid=118469: Wed Nov 6 14:39:56 2024 00:35:29.519 read: IOPS=313, BW=39.1MiB/s (41.0MB/s)(196MiB/5001msec) 00:35:29.519 slat (nsec): min=4196, max=26049, avg=9035.34, stdev=2783.61 00:35:29.519 clat (usec): min=3455, max=53708, avg=9557.32, stdev=5115.29 00:35:29.519 lat (usec): min=3462, max=53714, avg=9566.35, stdev=5115.45 00:35:29.519 clat percentiles (usec): 00:35:29.519 | 1.00th=[ 3490], 5.00th=[ 3490], 10.00th=[ 3523], 
20.00th=[ 6783], 00:35:29.519 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[10159], 60.00th=[11600], 00:35:29.519 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:35:29.519 | 99.00th=[15139], 99.50th=[46924], 99.90th=[53216], 99.95th=[53740], 00:35:29.519 | 99.99th=[53740] 00:35:29.519 bw ( KiB/s): min=31551, max=54528, per=35.09%, avg=39345.67, stdev=6716.62, samples=9 00:35:29.519 iops : min= 246, max= 426, avg=307.33, stdev=52.55, samples=9 00:35:29.519 lat (msec) : 4=16.67%, 10=33.01%, 20=49.36%, 50=0.57%, 100=0.38% 00:35:29.519 cpu : usr=93.86%, sys=4.74%, ctx=11, majf=0, minf=1636 00:35:29.519 IO depths : 1=31.4%, 2=68.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.519 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.519 00:35:29.519 Run status group 0 (all jobs): 00:35:29.519 READ: bw=110MiB/s (115MB/s), 32.4MiB/s-39.1MiB/s (34.0MB/s-41.0MB/s), io=548MiB (574MB), run=5001-5003msec 00:35:30.464 ----------------------------------------------------- 00:35:30.464 Suppressions used: 00:35:30.464 count bytes template 00:35:30.464 5 44 /usr/src/fio/parse.c 00:35:30.464 1 8 libtcmalloc_minimal.so 00:35:30.464 1 904 libcrypto.so 00:35:30.464 ----------------------------------------------------- 00:35:30.464 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:30.464 
14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 bdev_null0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 [2024-11-06 14:39:57.845358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 bdev_null1 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 bdev_null2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
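
Stripped of the xtrace noise, the create_subsystems pass above is three identical rounds of four RPCs. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt with SPDK's scripts/rpc.py on PATH (every bdev name, size, NQN, and the 10.0.0.3:4420 listener is taken from the trace; only the loop is added):

#!/usr/bin/env bash
# Recreate the three DIF-type-2 null bdevs and expose each one over NVMe/TCP,
# mirroring what create_subsystem 0 1 2 in target/dif.sh does above.
for sub in 0 1 2; do
  # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 2
  rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.3 -s 4420
done
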
00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:30.464 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:30.465 { 00:35:30.465 "params": { 00:35:30.465 "name": "Nvme$subsystem", 00:35:30.465 "trtype": "$TEST_TRANSPORT", 00:35:30.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.465 "adrfam": "ipv4", 00:35:30.465 "trsvcid": "$NVMF_PORT", 00:35:30.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.465 "hdgst": ${hdgst:-false}, 00:35:30.465 "ddgst": ${ddgst:-false} 00:35:30.465 }, 00:35:30.465 "method": "bdev_nvme_attach_controller" 00:35:30.465 } 00:35:30.465 EOF 00:35:30.465 )") 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:30.465 { 00:35:30.465 "params": { 00:35:30.465 "name": "Nvme$subsystem", 00:35:30.465 "trtype": "$TEST_TRANSPORT", 00:35:30.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.465 "adrfam": "ipv4", 00:35:30.465 "trsvcid": "$NVMF_PORT", 00:35:30.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.465 "hdgst": ${hdgst:-false}, 00:35:30.465 "ddgst": ${ddgst:-false} 00:35:30.465 }, 00:35:30.465 "method": 
"bdev_nvme_attach_controller" 00:35:30.465 } 00:35:30.465 EOF 00:35:30.465 )") 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:30.465 { 00:35:30.465 "params": { 00:35:30.465 "name": "Nvme$subsystem", 00:35:30.465 "trtype": "$TEST_TRANSPORT", 00:35:30.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.465 "adrfam": "ipv4", 00:35:30.465 "trsvcid": "$NVMF_PORT", 00:35:30.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.465 "hdgst": ${hdgst:-false}, 00:35:30.465 "ddgst": ${ddgst:-false} 00:35:30.465 }, 00:35:30.465 "method": "bdev_nvme_attach_controller" 00:35:30.465 } 00:35:30.465 EOF 00:35:30.465 )") 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:30.465 "params": { 00:35:30.465 "name": "Nvme0", 00:35:30.465 "trtype": "tcp", 00:35:30.465 "traddr": "10.0.0.3", 00:35:30.465 "adrfam": "ipv4", 00:35:30.465 "trsvcid": "4420", 00:35:30.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.465 "hdgst": false, 00:35:30.465 "ddgst": false 00:35:30.465 }, 00:35:30.465 "method": "bdev_nvme_attach_controller" 00:35:30.465 },{ 00:35:30.465 "params": { 00:35:30.465 "name": "Nvme1", 00:35:30.465 "trtype": "tcp", 00:35:30.465 "traddr": "10.0.0.3", 00:35:30.465 "adrfam": "ipv4", 00:35:30.465 "trsvcid": "4420", 00:35:30.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:30.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:30.465 "hdgst": false, 00:35:30.465 "ddgst": false 00:35:30.465 }, 00:35:30.465 "method": "bdev_nvme_attach_controller" 00:35:30.465 },{ 00:35:30.465 "params": { 00:35:30.465 "name": "Nvme2", 00:35:30.465 "trtype": "tcp", 00:35:30.465 "traddr": "10.0.0.3", 00:35:30.465 "adrfam": "ipv4", 00:35:30.465 "trsvcid": "4420", 00:35:30.465 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:30.465 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:30.465 "hdgst": false, 00:35:30.465 "ddgst": false 00:35:30.465 }, 00:35:30.465 "method": "bdev_nvme_attach_controller" 00:35:30.465 }' 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:35:30.465 14:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:30.465 14:39:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.726 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.726 ... 00:35:30.726 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.726 ... 00:35:30.726 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.726 ... 00:35:30.726 fio-3.35 00:35:30.726 Starting 24 threads 00:35:42.939 00:35:42.939 filename0: (groupid=0, jobs=1): err= 0: pid=118565: Wed Nov 6 14:40:09 2024 00:35:42.939 read: IOPS=307, BW=1229KiB/s (1258kB/s)(12.1MiB/10059msec) 00:35:42.939 slat (usec): min=3, max=8022, avg=22.45, stdev=284.94 00:35:42.939 clat (usec): min=1566, max=133300, avg=51876.73, stdev=22397.61 00:35:42.939 lat (usec): min=1574, max=133307, avg=51899.18, stdev=22401.00 00:35:42.939 clat percentiles (msec): 00:35:42.939 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 36], 00:35:42.939 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 53], 60.00th=[ 60], 00:35:42.939 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 79], 95.00th=[ 88], 00:35:42.939 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 134], 99.95th=[ 134], 00:35:42.939 | 99.99th=[ 134] 00:35:42.939 bw ( KiB/s): min= 840, max= 3704, per=4.52%, avg=1229.45, stdev=602.02, samples=20 00:35:42.939 iops : min= 210, max= 926, avg=307.35, stdev=150.51, samples=20 00:35:42.939 lat (msec) : 2=0.52%, 4=0.61%, 10=4.50%, 20=4.76%, 50=35.28% 00:35:42.939 lat (msec) : 100=52.20%, 250=2.14% 00:35:42.939 cpu : usr=38.42%, sys=0.46%, ctx=1108, majf=0, minf=1636 00:35:42.939 IO depths : 1=1.6%, 2=3.5%, 4=11.4%, 8=71.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:42.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 issued rwts: total=3090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.939 filename0: (groupid=0, jobs=1): err= 0: pid=118566: Wed Nov 6 14:40:09 2024 00:35:42.939 read: IOPS=266, BW=1066KiB/s (1092kB/s)(10.5MiB/10038msec) 00:35:42.939 slat (usec): min=3, max=8023, avg=19.66, stdev=279.14 00:35:42.939 clat (msec): min=20, max=123, avg=59.86, stdev=19.52 00:35:42.939 lat (msec): min=20, max=123, avg=59.88, stdev=19.51 00:35:42.939 clat percentiles (msec): 00:35:42.939 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 45], 00:35:42.939 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 62], 00:35:42.939 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 96], 00:35:42.939 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:35:42.939 | 99.99th=[ 124] 00:35:42.939 bw ( KiB/s): min= 763, max= 1712, per=3.92%, avg=1066.15, stdev=226.45, samples=20 00:35:42.939 iops : min= 190, max= 428, avg=266.40, stdev=56.74, samples=20 00:35:42.939 lat (msec) : 50=31.99%, 100=66.22%, 250=1.79% 00:35:42.939 cpu : usr=32.81%, sys=0.36%, ctx=872, majf=0, minf=1635 00:35:42.939 IO depths : 1=1.6%, 2=3.4%, 4=11.4%, 8=71.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:42.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 issued rwts: total=2676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.939 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:35:42.939 filename0: (groupid=0, jobs=1): err= 0: pid=118567: Wed Nov 6 14:40:09 2024 00:35:42.939 read: IOPS=317, BW=1268KiB/s (1299kB/s)(12.5MiB/10088msec) 00:35:42.939 slat (usec): min=3, max=4029, avg=10.75, stdev=71.23 00:35:42.939 clat (msec): min=6, max=115, avg=50.25, stdev=19.74 00:35:42.939 lat (msec): min=6, max=115, avg=50.26, stdev=19.73 00:35:42.939 clat percentiles (msec): 00:35:42.939 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 35], 00:35:42.939 | 30.00th=[ 39], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 56], 00:35:42.939 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 74], 95.00th=[ 85], 00:35:42.939 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 115], 99.95th=[ 115], 00:35:42.939 | 99.99th=[ 115] 00:35:42.939 bw ( KiB/s): min= 856, max= 3169, per=4.67%, avg=1272.85, stdev=471.65, samples=20 00:35:42.939 iops : min= 214, max= 792, avg=318.20, stdev=117.86, samples=20 00:35:42.939 lat (msec) : 10=1.50%, 20=6.25%, 50=46.51%, 100=45.17%, 250=0.56% 00:35:42.939 cpu : usr=38.81%, sys=0.46%, ctx=1053, majf=0, minf=1636 00:35:42.939 IO depths : 1=0.9%, 2=2.3%, 4=10.5%, 8=73.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:35:42.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 complete : 0=0.0%, 4=90.4%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 issued rwts: total=3199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.939 filename0: (groupid=0, jobs=1): err= 0: pid=118568: Wed Nov 6 14:40:09 2024 00:35:42.939 read: IOPS=256, BW=1028KiB/s (1053kB/s)(10.1MiB/10020msec) 00:35:42.939 slat (usec): min=3, max=8016, avg=17.10, stdev=208.82 00:35:42.939 clat (msec): min=15, max=119, avg=62.16, stdev=17.90 00:35:42.939 lat (msec): min=15, max=119, avg=62.18, stdev=17.90 00:35:42.939 clat percentiles (msec): 00:35:42.939 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 41], 20.00th=[ 51], 00:35:42.939 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 64], 00:35:42.939 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 95], 00:35:42.939 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 117], 99.95th=[ 120], 00:35:42.939 | 99.99th=[ 121] 00:35:42.939 bw ( KiB/s): min= 848, max= 1712, per=3.76%, avg=1023.45, stdev=176.74, samples=20 00:35:42.939 iops : min= 212, max= 428, avg=255.85, stdev=44.18, samples=20 00:35:42.939 lat (msec) : 20=0.62%, 50=19.07%, 100=78.14%, 250=2.17% 00:35:42.939 cpu : usr=43.43%, sys=0.60%, ctx=1479, majf=0, minf=1635 00:35:42.939 IO depths : 1=2.5%, 2=5.7%, 4=16.1%, 8=65.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:42.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 issued rwts: total=2575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.939 filename0: (groupid=0, jobs=1): err= 0: pid=118569: Wed Nov 6 14:40:09 2024 00:35:42.939 read: IOPS=289, BW=1159KiB/s (1187kB/s)(11.4MiB/10065msec) 00:35:42.939 slat (usec): min=3, max=8026, avg=14.51, stdev=181.78 00:35:42.939 clat (msec): min=13, max=118, avg=55.03, stdev=18.41 00:35:42.939 lat (msec): min=13, max=118, avg=55.05, stdev=18.41 00:35:42.939 clat percentiles (msec): 00:35:42.939 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 34], 20.00th=[ 38], 00:35:42.939 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 61], 00:35:42.939 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 81], 
95.00th=[ 86], 00:35:42.939 | 99.00th=[ 106], 99.50th=[ 113], 99.90th=[ 120], 99.95th=[ 120], 00:35:42.939 | 99.99th=[ 120] 00:35:42.939 bw ( KiB/s): min= 896, max= 1888, per=4.26%, avg=1160.40, stdev=227.17, samples=20 00:35:42.939 iops : min= 224, max= 472, avg=290.10, stdev=56.79, samples=20 00:35:42.939 lat (msec) : 20=1.13%, 50=43.95%, 100=53.41%, 250=1.51% 00:35:42.939 cpu : usr=38.62%, sys=0.36%, ctx=1201, majf=0, minf=1634 00:35:42.939 IO depths : 1=0.4%, 2=1.0%, 4=6.8%, 8=77.9%, 16=14.0%, 32=0.0%, >=64=0.0% 00:35:42.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 complete : 0=0.0%, 4=89.3%, 8=6.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 issued rwts: total=2917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.939 filename0: (groupid=0, jobs=1): err= 0: pid=118570: Wed Nov 6 14:40:09 2024 00:35:42.939 read: IOPS=260, BW=1040KiB/s (1065kB/s)(10.2MiB/10012msec) 00:35:42.939 slat (usec): min=3, max=3492, avg=12.12, stdev=95.48 00:35:42.939 clat (msec): min=12, max=131, avg=61.43, stdev=18.52 00:35:42.939 lat (msec): min=12, max=131, avg=61.44, stdev=18.52 00:35:42.939 clat percentiles (msec): 00:35:42.939 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 48], 00:35:42.939 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 64], 00:35:42.939 | 70.00th=[ 70], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 92], 00:35:42.939 | 99.00th=[ 108], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:35:42.939 | 99.99th=[ 132] 00:35:42.939 bw ( KiB/s): min= 808, max= 1640, per=3.83%, avg=1042.37, stdev=180.97, samples=19 00:35:42.939 iops : min= 202, max= 410, avg=260.58, stdev=45.23, samples=19 00:35:42.939 lat (msec) : 20=1.61%, 50=20.78%, 100=76.04%, 250=1.57% 00:35:42.939 cpu : usr=42.97%, sys=0.57%, ctx=1280, majf=0, minf=1635 00:35:42.939 IO depths : 1=2.9%, 2=6.6%, 4=17.3%, 8=63.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:42.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.939 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.939 filename0: (groupid=0, jobs=1): err= 0: pid=118571: Wed Nov 6 14:40:09 2024 00:35:42.939 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10008msec) 00:35:42.939 slat (usec): min=3, max=8023, avg=15.20, stdev=215.96 00:35:42.939 clat (msec): min=3, max=131, avg=58.07, stdev=19.01 00:35:42.939 lat (msec): min=3, max=131, avg=58.09, stdev=19.00 00:35:42.939 clat percentiles (msec): 00:35:42.939 | 1.00th=[ 7], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 47], 00:35:42.939 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:35:42.940 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:35:42.940 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 132], 99.95th=[ 132], 00:35:42.940 | 99.99th=[ 132] 00:35:42.940 bw ( KiB/s): min= 824, max= 1280, per=3.93%, avg=1071.58, stdev=125.63, samples=19 00:35:42.940 iops : min= 206, max= 320, avg=267.89, stdev=31.41, samples=19 00:35:42.940 lat (msec) : 4=0.36%, 10=1.38%, 20=0.58%, 50=34.44%, 100=60.48% 00:35:42.940 lat (msec) : 250=2.76% 00:35:42.940 cpu : usr=32.76%, sys=0.37%, ctx=875, majf=0, minf=1633 00:35:42.940 IO depths : 1=0.9%, 2=2.2%, 4=9.4%, 8=74.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete 
: 0=0.0%, 4=89.9%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=2753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.940 filename0: (groupid=0, jobs=1): err= 0: pid=118572: Wed Nov 6 14:40:09 2024 00:35:42.940 read: IOPS=261, BW=1045KiB/s (1070kB/s)(10.2MiB/10003msec) 00:35:42.940 slat (usec): min=3, max=8023, avg=15.73, stdev=221.66 00:35:42.940 clat (msec): min=6, max=128, avg=61.16, stdev=20.38 00:35:42.940 lat (msec): min=6, max=128, avg=61.18, stdev=20.37 00:35:42.940 clat percentiles (msec): 00:35:42.940 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 48], 00:35:42.940 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:35:42.940 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 96], 00:35:42.940 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:35:42.940 | 99.99th=[ 129] 00:35:42.940 bw ( KiB/s): min= 856, max= 1488, per=3.82%, avg=1040.00, stdev=145.45, samples=19 00:35:42.940 iops : min= 214, max= 372, avg=260.00, stdev=36.36, samples=19 00:35:42.940 lat (msec) : 10=1.42%, 20=0.42%, 50=30.99%, 100=63.16%, 250=4.02% 00:35:42.940 cpu : usr=32.67%, sys=0.43%, ctx=883, majf=0, minf=1635 00:35:42.940 IO depths : 1=0.9%, 2=2.5%, 4=9.9%, 8=73.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete : 0=0.0%, 4=90.3%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=2614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.940 filename1: (groupid=0, jobs=1): err= 0: pid=118573: Wed Nov 6 14:40:09 2024 00:35:42.940 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10034msec) 00:35:42.940 slat (usec): min=3, max=4025, avg=11.81, stdev=84.64 00:35:42.940 clat (msec): min=15, max=127, avg=59.13, stdev=18.83 00:35:42.940 lat (msec): min=15, max=127, avg=59.14, stdev=18.84 00:35:42.940 clat percentiles (msec): 00:35:42.940 | 1.00th=[ 17], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 44], 00:35:42.940 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 63], 00:35:42.940 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 90], 00:35:42.940 | 99.00th=[ 107], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 128], 00:35:42.940 | 99.99th=[ 128] 00:35:42.940 bw ( KiB/s): min= 856, max= 1816, per=3.95%, avg=1076.05, stdev=225.15, samples=20 00:35:42.940 iops : min= 214, max= 454, avg=269.00, stdev=56.29, samples=20 00:35:42.940 lat (msec) : 20=1.62%, 50=28.14%, 100=68.06%, 250=2.18% 00:35:42.940 cpu : usr=43.43%, sys=0.58%, ctx=1334, majf=0, minf=1633 00:35:42.940 IO depths : 1=2.4%, 2=5.6%, 4=15.8%, 8=65.7%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=2711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.940 filename1: (groupid=0, jobs=1): err= 0: pid=118574: Wed Nov 6 14:40:09 2024 00:35:42.940 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.3MiB/10058msec) 00:35:42.940 slat (usec): min=2, max=4019, avg=10.16, stdev=90.76 00:35:42.940 clat (usec): min=1129, max=133887, avg=41034.37, stdev=21677.20 00:35:42.940 lat (usec): min=1135, max=133894, avg=41044.53, stdev=21678.19 00:35:42.940 clat percentiles (usec): 00:35:42.940 | 1.00th=[ 1270], 
5.00th=[ 2343], 10.00th=[ 5604], 20.00th=[ 25560], 00:35:42.940 | 30.00th=[ 33817], 40.00th=[ 38011], 50.00th=[ 40109], 60.00th=[ 45351], 00:35:42.940 | 70.00th=[ 51643], 80.00th=[ 56361], 90.00th=[ 67634], 95.00th=[ 78119], 00:35:42.940 | 99.00th=[ 95945], 99.50th=[106431], 99.90th=[133694], 99.95th=[133694], 00:35:42.940 | 99.99th=[133694] 00:35:42.940 bw ( KiB/s): min= 728, max= 6168, per=5.73%, avg=1559.20, stdev=1100.17, samples=20 00:35:42.940 iops : min= 182, max= 1542, avg=389.80, stdev=275.04, samples=20 00:35:42.940 lat (msec) : 2=4.68%, 4=2.87%, 10=6.06%, 20=3.02%, 50=51.84% 00:35:42.940 lat (msec) : 100=30.73%, 250=0.79% 00:35:42.940 cpu : usr=45.95%, sys=0.68%, ctx=1387, majf=0, minf=1636 00:35:42.940 IO depths : 1=1.2%, 2=2.6%, 4=10.3%, 8=73.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete : 0=0.0%, 4=90.0%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=3908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.940 filename1: (groupid=0, jobs=1): err= 0: pid=118575: Wed Nov 6 14:40:09 2024 00:35:42.940 read: IOPS=300, BW=1201KiB/s (1230kB/s)(11.8MiB/10049msec) 00:35:42.940 slat (usec): min=3, max=4022, avg=12.35, stdev=99.02 00:35:42.940 clat (msec): min=10, max=130, avg=53.15, stdev=19.86 00:35:42.940 lat (msec): min=10, max=130, avg=53.17, stdev=19.86 00:35:42.940 clat percentiles (msec): 00:35:42.940 | 1.00th=[ 14], 5.00th=[ 19], 10.00th=[ 31], 20.00th=[ 36], 00:35:42.940 | 30.00th=[ 41], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 58], 00:35:42.940 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 87], 00:35:42.940 | 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 131], 99.95th=[ 131], 00:35:42.940 | 99.99th=[ 131] 00:35:42.940 bw ( KiB/s): min= 816, max= 2637, per=4.41%, avg=1200.65, stdev=382.89, samples=20 00:35:42.940 iops : min= 204, max= 659, avg=300.15, stdev=95.67, samples=20 00:35:42.940 lat (msec) : 20=5.50%, 50=40.24%, 100=52.64%, 250=1.62% 00:35:42.940 cpu : usr=43.83%, sys=0.54%, ctx=1254, majf=0, minf=1635 00:35:42.940 IO depths : 1=1.5%, 2=3.2%, 4=10.8%, 8=72.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=3017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.940 filename1: (groupid=0, jobs=1): err= 0: pid=118576: Wed Nov 6 14:40:09 2024 00:35:42.940 read: IOPS=276, BW=1104KiB/s (1131kB/s)(10.8MiB/10002msec) 00:35:42.940 slat (usec): min=3, max=4021, avg=11.11, stdev=76.50 00:35:42.940 clat (msec): min=4, max=123, avg=57.89, stdev=18.08 00:35:42.940 lat (msec): min=4, max=123, avg=57.90, stdev=18.08 00:35:42.940 clat percentiles (msec): 00:35:42.940 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 43], 00:35:42.940 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 62], 00:35:42.940 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 91], 00:35:42.940 | 99.00th=[ 102], 99.50th=[ 110], 99.90th=[ 125], 99.95th=[ 125], 00:35:42.940 | 99.99th=[ 125] 00:35:42.940 bw ( KiB/s): min= 896, max= 1536, per=3.99%, avg=1087.89, stdev=153.38, samples=19 00:35:42.940 iops : min= 224, max= 384, avg=271.95, stdev=38.34, samples=19 00:35:42.940 lat (msec) : 10=0.29%, 20=0.69%, 50=32.34%, 100=65.30%, 250=1.38% 00:35:42.940 cpu : 
usr=38.11%, sys=0.37%, ctx=1088, majf=0, minf=1635 00:35:42.940 IO depths : 1=1.7%, 2=3.9%, 4=12.2%, 8=70.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=2761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.940 filename1: (groupid=0, jobs=1): err= 0: pid=118577: Wed Nov 6 14:40:09 2024 00:35:42.940 read: IOPS=289, BW=1157KiB/s (1185kB/s)(11.4MiB/10074msec) 00:35:42.940 slat (usec): min=4, max=8022, avg=11.65, stdev=148.47 00:35:42.940 clat (msec): min=22, max=119, avg=55.19, stdev=18.23 00:35:42.940 lat (msec): min=22, max=119, avg=55.20, stdev=18.24 00:35:42.940 clat percentiles (msec): 00:35:42.940 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 36], 00:35:42.940 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 61], 00:35:42.940 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 86], 00:35:42.940 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:35:42.940 | 99.99th=[ 121] 00:35:42.940 bw ( KiB/s): min= 896, max= 1840, per=4.26%, avg=1159.60, stdev=230.03, samples=20 00:35:42.940 iops : min= 224, max= 460, avg=289.90, stdev=57.51, samples=20 00:35:42.940 lat (msec) : 50=45.90%, 100=52.86%, 250=1.23% 00:35:42.940 cpu : usr=32.62%, sys=0.55%, ctx=881, majf=0, minf=1636 00:35:42.940 IO depths : 1=0.5%, 2=1.2%, 4=7.7%, 8=77.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete : 0=0.0%, 4=89.3%, 8=6.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=2915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.940 filename1: (groupid=0, jobs=1): err= 0: pid=118578: Wed Nov 6 14:40:09 2024 00:35:42.940 read: IOPS=252, BW=1010KiB/s (1034kB/s)(9.87MiB/10006msec) 00:35:42.940 slat (usec): min=4, max=8017, avg=12.50, stdev=159.40 00:35:42.940 clat (msec): min=6, max=113, avg=63.28, stdev=17.95 00:35:42.940 lat (msec): min=6, max=113, avg=63.29, stdev=17.95 00:35:42.940 clat percentiles (msec): 00:35:42.940 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 52], 00:35:42.940 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:35:42.940 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:35:42.940 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 114], 99.95th=[ 114], 00:35:42.940 | 99.99th=[ 114] 00:35:42.940 bw ( KiB/s): min= 792, max= 1410, per=3.66%, avg=995.47, stdev=129.06, samples=19 00:35:42.940 iops : min= 198, max= 352, avg=248.84, stdev=32.18, samples=19 00:35:42.940 lat (msec) : 10=1.27%, 20=0.63%, 50=16.07%, 100=78.98%, 250=3.05% 00:35:42.940 cpu : usr=40.27%, sys=0.49%, ctx=1165, majf=0, minf=1635 00:35:42.940 IO depths : 1=2.9%, 2=6.2%, 4=16.5%, 8=64.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:35:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.940 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.941 filename1: (groupid=0, jobs=1): err= 0: pid=118579: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=262, BW=1050KiB/s (1075kB/s)(10.3MiB/10006msec) 00:35:42.941 slat (usec): min=4, max=7049, avg=12.15, 
stdev=137.47 00:35:42.941 clat (msec): min=6, max=134, avg=60.88, stdev=18.87 00:35:42.941 lat (msec): min=6, max=134, avg=60.90, stdev=18.87 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 10], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 50], 00:35:42.941 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 61], 00:35:42.941 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 95], 00:35:42.941 | 99.00th=[ 117], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 136], 00:35:42.941 | 99.99th=[ 136] 00:35:42.941 bw ( KiB/s): min= 768, max= 1539, per=3.81%, avg=1036.37, stdev=160.17, samples=19 00:35:42.941 iops : min= 192, max= 384, avg=259.05, stdev=39.91, samples=19 00:35:42.941 lat (msec) : 10=1.22%, 20=0.95%, 50=19.88%, 100=74.26%, 250=3.69% 00:35:42.941 cpu : usr=45.18%, sys=0.65%, ctx=1314, majf=0, minf=1635 00:35:42.941 IO depths : 1=3.1%, 2=6.7%, 4=17.8%, 8=62.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:35:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 issued rwts: total=2626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.941 filename1: (groupid=0, jobs=1): err= 0: pid=118580: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=274, BW=1097KiB/s (1123kB/s)(10.7MiB/10029msec) 00:35:42.941 slat (nsec): min=3843, max=57862, avg=9380.52, stdev=4341.03 00:35:42.941 clat (msec): min=20, max=120, avg=58.26, stdev=20.15 00:35:42.941 lat (msec): min=20, max=120, avg=58.27, stdev=20.14 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 39], 00:35:42.941 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 61], 00:35:42.941 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 96], 00:35:42.941 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:35:42.941 | 99.99th=[ 121] 00:35:42.941 bw ( KiB/s): min= 768, max= 1592, per=4.02%, avg=1094.55, stdev=210.41, samples=20 00:35:42.941 iops : min= 192, max= 398, avg=273.60, stdev=52.61, samples=20 00:35:42.941 lat (msec) : 50=44.22%, 100=52.62%, 250=3.16% 00:35:42.941 cpu : usr=32.58%, sys=0.46%, ctx=912, majf=0, minf=1633 00:35:42.941 IO depths : 1=1.3%, 2=3.2%, 4=11.2%, 8=72.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 issued rwts: total=2750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.941 filename2: (groupid=0, jobs=1): err= 0: pid=118581: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=344, BW=1378KiB/s (1411kB/s)(13.6MiB/10073msec) 00:35:42.941 slat (usec): min=3, max=8020, avg=12.96, stdev=152.13 00:35:42.941 clat (usec): min=1392, max=116009, avg=46302.79, stdev=19736.28 00:35:42.941 lat (usec): min=1400, max=116027, avg=46315.74, stdev=19731.00 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 24], 20.00th=[ 34], 00:35:42.941 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 45], 60.00th=[ 49], 00:35:42.941 | 70.00th=[ 56], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 81], 00:35:42.941 | 99.00th=[ 114], 99.50th=[ 116], 99.90th=[ 116], 99.95th=[ 116], 00:35:42.941 | 99.99th=[ 116] 00:35:42.941 bw ( KiB/s): min= 816, max= 3449, per=5.07%, avg=1380.95, stdev=520.19, samples=20 00:35:42.941 iops : min= 204, 
max= 862, avg=345.20, stdev=130.00, samples=20 00:35:42.941 lat (msec) : 2=0.14%, 4=1.12%, 10=4.44%, 20=2.88%, 50=53.04% 00:35:42.941 lat (msec) : 100=37.16%, 250=1.21% 00:35:42.941 cpu : usr=43.47%, sys=0.64%, ctx=1417, majf=0, minf=1634 00:35:42.941 IO depths : 1=0.5%, 2=1.2%, 4=7.6%, 8=78.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 complete : 0=0.0%, 4=89.3%, 8=5.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 issued rwts: total=3469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.941 filename2: (groupid=0, jobs=1): err= 0: pid=118582: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=276, BW=1106KiB/s (1132kB/s)(10.8MiB/10008msec) 00:35:42.941 slat (usec): min=4, max=8019, avg=15.15, stdev=215.38 00:35:42.941 clat (msec): min=4, max=133, avg=57.81, stdev=18.90 00:35:42.941 lat (msec): min=4, max=133, avg=57.82, stdev=18.90 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 42], 00:35:42.941 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 61], 00:35:42.941 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 90], 00:35:42.941 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 133], 99.95th=[ 134], 00:35:42.941 | 99.99th=[ 134] 00:35:42.941 bw ( KiB/s): min= 768, max= 1410, per=3.97%, avg=1081.37, stdev=168.77, samples=19 00:35:42.941 iops : min= 192, max= 352, avg=270.32, stdev=42.14, samples=19 00:35:42.941 lat (msec) : 10=0.80%, 20=1.16%, 50=30.80%, 100=63.38%, 250=3.87% 00:35:42.941 cpu : usr=44.60%, sys=0.49%, ctx=1495, majf=0, minf=1633 00:35:42.941 IO depths : 1=1.3%, 2=2.9%, 4=9.8%, 8=73.1%, 16=12.9%, 32=0.0%, >=64=0.0% 00:35:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 complete : 0=0.0%, 4=90.4%, 8=5.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 issued rwts: total=2766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.941 filename2: (groupid=0, jobs=1): err= 0: pid=118583: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=305, BW=1224KiB/s (1253kB/s)(12.0MiB/10081msec) 00:35:42.941 slat (usec): min=3, max=8021, avg=11.51, stdev=144.37 00:35:42.941 clat (msec): min=6, max=131, avg=52.13, stdev=20.38 00:35:42.941 lat (msec): min=6, max=131, avg=52.14, stdev=20.38 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 30], 20.00th=[ 36], 00:35:42.941 | 30.00th=[ 40], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 57], 00:35:42.941 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 81], 95.00th=[ 88], 00:35:42.941 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:35:42.941 | 99.99th=[ 132] 00:35:42.941 bw ( KiB/s): min= 768, max= 2656, per=4.51%, avg=1227.20, stdev=389.33, samples=20 00:35:42.941 iops : min= 192, max= 664, avg=306.80, stdev=97.33, samples=20 00:35:42.941 lat (msec) : 10=1.04%, 20=2.82%, 50=47.86%, 100=45.36%, 250=2.92% 00:35:42.941 cpu : usr=38.77%, sys=0.51%, ctx=1098, majf=0, minf=1634 00:35:42.941 IO depths : 1=0.9%, 2=2.1%, 4=9.4%, 8=75.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 issued rwts: total=3084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 
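
A reading aid for these per-job blocks: fio's per= column is the job's share of the whole group's read bandwidth, so any block can be cross-checked against the READ line of the run status group further down. Taking the pid=118583 block just above (numbers copied from the log; the awk line is only a cross-check, not fio output):

# an avg bw of 1227.20 KiB/s at per=4.51% implies a group total of roughly:
awk 'BEGIN { printf "%.1f MiB/s\n", 1227.20 / 0.0451 / 1024 }'   # -> 26.6 MiB/s, matching "READ: bw=26.6MiB/s"
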
00:35:42.941 filename2: (groupid=0, jobs=1): err= 0: pid=118584: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=264, BW=1060KiB/s (1085kB/s)(10.4MiB/10057msec) 00:35:42.941 slat (usec): min=3, max=8203, avg=15.65, stdev=222.00 00:35:42.941 clat (msec): min=18, max=120, avg=60.23, stdev=18.89 00:35:42.941 lat (msec): min=18, max=120, avg=60.25, stdev=18.88 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 45], 00:35:42.941 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:35:42.941 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:35:42.941 | 99.00th=[ 115], 99.50th=[ 115], 99.90th=[ 121], 99.95th=[ 121], 00:35:42.941 | 99.99th=[ 121] 00:35:42.941 bw ( KiB/s): min= 840, max= 1664, per=3.89%, avg=1059.40, stdev=187.19, samples=20 00:35:42.941 iops : min= 210, max= 416, avg=264.85, stdev=46.80, samples=20 00:35:42.941 lat (msec) : 20=0.30%, 50=34.45%, 100=62.66%, 250=2.59% 00:35:42.941 cpu : usr=32.66%, sys=0.45%, ctx=915, majf=0, minf=1635 00:35:42.941 IO depths : 1=1.2%, 2=2.5%, 4=10.2%, 8=74.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 complete : 0=0.0%, 4=89.9%, 8=5.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 issued rwts: total=2665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.941 filename2: (groupid=0, jobs=1): err= 0: pid=118585: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10057msec) 00:35:42.941 slat (usec): min=3, max=8018, avg=11.61, stdev=150.66 00:35:42.941 clat (msec): min=20, max=143, avg=56.76, stdev=18.70 00:35:42.941 lat (msec): min=20, max=143, avg=56.77, stdev=18.70 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 39], 00:35:42.941 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 59], 60.00th=[ 61], 00:35:42.941 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 85], 00:35:42.941 | 99.00th=[ 102], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:35:42.941 | 99.99th=[ 144] 00:35:42.941 bw ( KiB/s): min= 728, max= 2072, per=4.13%, avg=1124.80, stdev=269.59, samples=20 00:35:42.941 iops : min= 182, max= 518, avg=281.20, stdev=67.40, samples=20 00:35:42.941 lat (msec) : 50=44.27%, 100=54.60%, 250=1.13% 00:35:42.941 cpu : usr=32.68%, sys=0.46%, ctx=893, majf=0, minf=1636 00:35:42.941 IO depths : 1=0.5%, 2=1.3%, 4=8.7%, 8=76.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:35:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 complete : 0=0.0%, 4=89.5%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.941 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.941 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.941 filename2: (groupid=0, jobs=1): err= 0: pid=118586: Wed Nov 6 14:40:09 2024 00:35:42.941 read: IOPS=266, BW=1065KiB/s (1091kB/s)(10.4MiB/10022msec) 00:35:42.941 slat (usec): min=3, max=8022, avg=15.21, stdev=219.29 00:35:42.941 clat (msec): min=21, max=130, avg=59.99, stdev=18.61 00:35:42.941 lat (msec): min=21, max=130, avg=60.00, stdev=18.62 00:35:42.941 clat percentiles (msec): 00:35:42.941 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 46], 00:35:42.941 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 61], 00:35:42.941 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 87], 00:35:42.941 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 
131], 99.95th=[ 131], 00:35:42.941 | 99.99th=[ 131] 00:35:42.941 bw ( KiB/s): min= 784, max= 1464, per=3.90%, avg=1061.10, stdev=178.22, samples=20 00:35:42.941 iops : min= 196, max= 366, avg=265.25, stdev=44.58, samples=20 00:35:42.941 lat (msec) : 50=33.72%, 100=63.96%, 250=2.32% 00:35:42.942 cpu : usr=32.54%, sys=0.50%, ctx=903, majf=0, minf=1635 00:35:42.942 IO depths : 1=0.9%, 2=2.2%, 4=8.5%, 8=75.4%, 16=13.0%, 32=0.0%, >=64=0.0% 00:35:42.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.942 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.942 issued rwts: total=2669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.942 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.942 filename2: (groupid=0, jobs=1): err= 0: pid=118587: Wed Nov 6 14:40:09 2024 00:35:42.942 read: IOPS=260, BW=1044KiB/s (1069kB/s)(10.2MiB/10010msec) 00:35:42.942 slat (usec): min=4, max=6021, avg=13.43, stdev=141.48 00:35:42.942 clat (msec): min=13, max=137, avg=61.19, stdev=18.74 00:35:42.942 lat (msec): min=13, max=138, avg=61.20, stdev=18.74 00:35:42.942 clat percentiles (msec): 00:35:42.942 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 38], 20.00th=[ 48], 00:35:42.942 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 64], 00:35:42.942 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 94], 00:35:42.942 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 138], 99.95th=[ 138], 00:35:42.942 | 99.99th=[ 138] 00:35:42.942 bw ( KiB/s): min= 768, max= 1664, per=3.78%, avg=1029.74, stdev=184.24, samples=19 00:35:42.942 iops : min= 192, max= 416, avg=257.42, stdev=46.04, samples=19 00:35:42.942 lat (msec) : 20=0.42%, 50=23.12%, 100=73.70%, 250=2.76% 00:35:42.942 cpu : usr=44.17%, sys=0.54%, ctx=1394, majf=0, minf=1633 00:35:42.942 IO depths : 1=2.2%, 2=5.1%, 4=14.5%, 8=67.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:42.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.942 complete : 0=0.0%, 4=91.4%, 8=3.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.942 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.942 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.942 filename2: (groupid=0, jobs=1): err= 0: pid=118588: Wed Nov 6 14:40:09 2024 00:35:42.942 read: IOPS=288, BW=1156KiB/s (1183kB/s)(11.4MiB/10063msec) 00:35:42.942 slat (usec): min=3, max=8024, avg=18.04, stdev=257.78 00:35:42.942 clat (msec): min=11, max=118, avg=55.19, stdev=17.90 00:35:42.942 lat (msec): min=11, max=118, avg=55.21, stdev=17.90 00:35:42.942 clat percentiles (msec): 00:35:42.942 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 38], 00:35:42.942 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 59], 60.00th=[ 61], 00:35:42.942 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 85], 00:35:42.942 | 99.00th=[ 97], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 120], 00:35:42.942 | 99.99th=[ 120] 00:35:42.942 bw ( KiB/s): min= 944, max= 2016, per=4.25%, avg=1156.40, stdev=243.53, samples=20 00:35:42.942 iops : min= 236, max= 504, avg=289.10, stdev=60.88, samples=20 00:35:42.942 lat (msec) : 20=2.55%, 50=40.90%, 100=55.80%, 250=0.76% 00:35:42.942 cpu : usr=32.65%, sys=0.43%, ctx=893, majf=0, minf=1633 00:35:42.942 IO depths : 1=1.3%, 2=3.0%, 4=10.3%, 8=72.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:42.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.942 complete : 0=0.0%, 4=90.3%, 8=5.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.942 issued rwts: total=2907,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:35:42.942 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.942 00:35:42.942 Run status group 0 (all jobs): 00:35:42.942 READ: bw=26.6MiB/s (27.9MB/s), 1010KiB/s-1554KiB/s (1034kB/s-1591kB/s), io=268MiB (281MB), run=10002-10088msec 00:35:42.942 ----------------------------------------------------- 00:35:42.942 Suppressions used: 00:35:42.942 count bytes template 00:35:42.942 45 402 /usr/src/fio/parse.c 00:35:42.942 1 8 libtcmalloc_minimal.so 00:35:42.942 1 904 libcrypto.so 00:35:42.942 ----------------------------------------------------- 00:35:42.942 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
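
The destroy_subsystems pass here mirrors the setup: each NVMe-oF subsystem is deleted first (taking its namespace and listener down with it), then its backing null bdev. As a sketch under the same rpc.py assumptions as the setup sketch above:

#!/usr/bin/env bash
# Tear down what create_subsystems 0 1 2 built, in the same order dif.sh uses.
for sub in 0 1 2; do
  rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
  rpc.py bdev_null_delete "bdev_null$sub"
done
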
00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 bdev_null0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 [2024-11-06 14:40:10.127228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 bdev_null1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.942 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.943 { 00:35:42.943 "params": { 00:35:42.943 "name": "Nvme$subsystem", 00:35:42.943 "trtype": "$TEST_TRANSPORT", 00:35:42.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.943 "adrfam": "ipv4", 00:35:42.943 "trsvcid": "$NVMF_PORT", 00:35:42.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.943 "hdgst": ${hdgst:-false}, 00:35:42.943 "ddgst": ${ddgst:-false} 00:35:42.943 }, 00:35:42.943 "method": "bdev_nvme_attach_controller" 00:35:42.943 } 00:35:42.943 EOF 00:35:42.943 )") 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.943 { 00:35:42.943 "params": { 00:35:42.943 "name": "Nvme$subsystem", 00:35:42.943 "trtype": "$TEST_TRANSPORT", 00:35:42.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.943 "adrfam": "ipv4", 00:35:42.943 "trsvcid": "$NVMF_PORT", 00:35:42.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.943 "hdgst": ${hdgst:-false}, 00:35:42.943 "ddgst": ${ddgst:-false} 00:35:42.943 }, 00:35:42.943 "method": "bdev_nvme_attach_controller" 00:35:42.943 } 00:35:42.943 EOF 00:35:42.943 )") 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
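Note on the config assembly traced above (nvmf/common.sh@560-584): gen_nvmf_target_json builds the bdev config that fio consumes by appending one heredoc fragment per subsystem to a bash array, then comma-joining the fragments with IFS before printing them; the `jq .` step at @584 pretty-prints the result, though its exact plumbing is not visible in this trace. A minimal standalone sketch of the same pattern, reusing this run's transport and address values (the function name is ours):

#!/usr/bin/env bash
# Sketch of the per-subsystem JSON assembly seen in the trace above.
gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "$@"; do
    # One fragment per subsystem, mirroring the heredoc at nvmf/common.sh@582.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Comma-join the fragments, as at @585-586; the output matches the config
  # printed in the log just below.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_target_json_sketch 0 1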
00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:42.943 "params": { 00:35:42.943 "name": "Nvme0", 00:35:42.943 "trtype": "tcp", 00:35:42.943 "traddr": "10.0.0.3", 00:35:42.943 "adrfam": "ipv4", 00:35:42.943 "trsvcid": "4420", 00:35:42.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.943 "hdgst": false, 00:35:42.943 "ddgst": false 00:35:42.943 }, 00:35:42.943 "method": "bdev_nvme_attach_controller" 00:35:42.943 },{ 00:35:42.943 "params": { 00:35:42.943 "name": "Nvme1", 00:35:42.943 "trtype": "tcp", 00:35:42.943 "traddr": "10.0.0.3", 00:35:42.943 "adrfam": "ipv4", 00:35:42.943 "trsvcid": "4420", 00:35:42.943 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.943 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.943 "hdgst": false, 00:35:42.943 "ddgst": false 00:35:42.943 }, 00:35:42.943 "method": "bdev_nvme_attach_controller" 00:35:42.943 }' 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:42.943 14:40:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.943 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:42.943 ... 00:35:42.943 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:42.943 ... 
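Note on the launch sequence just above: fio loads the SPDK bdev engine as an external plugin, and this build is ASan-instrumented, so the harness first locates the sanitizer runtime (`ldd` on the plugin, `grep libasan`, third column via awk) and preloads it ahead of the plugin. The ordering matters: if the plugin were preloaded without the ASan runtime in front of it, ASan would typically refuse to start. Condensed from the trace; `bdev.json` and `dif.fio` below stand in for the /dev/fd/62 and /dev/fd/61 descriptors the harness actually passes:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Find the ASan runtime the plugin links against (third ldd column).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Preload the sanitizer runtime before the ioengine, then run the job file.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio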
00:35:42.943 fio-3.35 00:35:42.943 Starting 4 threads 00:35:49.506 00:35:49.506 filename0: (groupid=0, jobs=1): err= 0: pid=118718: Wed Nov 6 14:40:16 2024 00:35:49.506 read: IOPS=2496, BW=19.5MiB/s (20.4MB/s)(97.6MiB/5003msec) 00:35:49.506 slat (nsec): min=2970, max=36527, avg=6980.04, stdev=1762.26 00:35:49.506 clat (usec): min=2075, max=3358, avg=3168.20, stdev=43.27 00:35:49.506 lat (usec): min=2079, max=3369, avg=3175.18, stdev=43.67 00:35:49.506 clat percentiles (usec): 00:35:49.506 | 1.00th=[ 3097], 5.00th=[ 3130], 10.00th=[ 3130], 20.00th=[ 3163], 00:35:49.506 | 30.00th=[ 3163], 40.00th=[ 3163], 50.00th=[ 3163], 60.00th=[ 3163], 00:35:49.506 | 70.00th=[ 3195], 80.00th=[ 3195], 90.00th=[ 3195], 95.00th=[ 3195], 00:35:49.506 | 99.00th=[ 3261], 99.50th=[ 3294], 99.90th=[ 3294], 99.95th=[ 3326], 00:35:49.506 | 99.99th=[ 3359] 00:35:49.506 bw ( KiB/s): min=19928, max=20096, per=25.02%, avg=19977.78, stdev=46.26, samples=9 00:35:49.506 iops : min= 2491, max= 2512, avg=2497.22, stdev= 5.78, samples=9 00:35:49.506 lat (msec) : 4=100.00% 00:35:49.506 cpu : usr=96.46%, sys=2.64%, ctx=10, majf=0, minf=1634 00:35:49.506 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.506 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.506 issued rwts: total=12488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:49.506 filename0: (groupid=0, jobs=1): err= 0: pid=118719: Wed Nov 6 14:40:16 2024 00:35:49.506 read: IOPS=2495, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5002msec) 00:35:49.506 slat (nsec): min=3080, max=52595, avg=14267.47, stdev=6987.19 00:35:49.506 clat (usec): min=2307, max=5148, avg=3136.45, stdev=91.43 00:35:49.506 lat (usec): min=2317, max=5161, avg=3150.72, stdev=91.63 00:35:49.506 clat percentiles (usec): 00:35:49.506 | 1.00th=[ 3032], 5.00th=[ 3064], 10.00th=[ 3064], 20.00th=[ 3097], 00:35:49.506 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3130], 60.00th=[ 3163], 00:35:49.506 | 70.00th=[ 3163], 80.00th=[ 3163], 90.00th=[ 3195], 95.00th=[ 3195], 00:35:49.506 | 99.00th=[ 3261], 99.50th=[ 3752], 99.90th=[ 3818], 99.95th=[ 4359], 00:35:49.506 | 99.99th=[ 4424] 00:35:49.507 bw ( KiB/s): min=19879, max=20080, per=25.01%, avg=19966.11, stdev=53.09, samples=9 00:35:49.507 iops : min= 2484, max= 2510, avg=2495.67, stdev= 6.82, samples=9 00:35:49.507 lat (msec) : 4=99.94%, 10=0.06% 00:35:49.507 cpu : usr=96.78%, sys=2.34%, ctx=90, majf=0, minf=1636 00:35:49.507 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.507 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.507 issued rwts: total=12480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:49.507 filename1: (groupid=0, jobs=1): err= 0: pid=118720: Wed Nov 6 14:40:16 2024 00:35:49.507 read: IOPS=2495, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5002msec) 00:35:49.507 slat (nsec): min=2977, max=52891, avg=13286.84, stdev=7160.53 00:35:49.507 clat (usec): min=1698, max=5334, avg=3137.97, stdev=106.90 00:35:49.507 lat (usec): min=1708, max=5343, avg=3151.26, stdev=107.27 00:35:49.507 clat percentiles (usec): 00:35:49.507 | 1.00th=[ 3032], 5.00th=[ 3064], 10.00th=[ 3097], 20.00th=[ 3097], 00:35:49.507 | 30.00th=[ 3130], 40.00th=[ 3130], 50.00th=[ 
3130], 60.00th=[ 3163], 00:35:49.507 | 70.00th=[ 3163], 80.00th=[ 3163], 90.00th=[ 3195], 95.00th=[ 3195], 00:35:49.507 | 99.00th=[ 3228], 99.50th=[ 3294], 99.90th=[ 4817], 99.95th=[ 4817], 00:35:49.507 | 99.99th=[ 5276] 00:35:49.507 bw ( KiB/s): min=19840, max=20096, per=25.01%, avg=19968.00, stdev=88.00, samples=9 00:35:49.507 iops : min= 2480, max= 2512, avg=2496.00, stdev=11.00, samples=9 00:35:49.507 lat (msec) : 2=0.02%, 4=99.80%, 10=0.18% 00:35:49.507 cpu : usr=96.04%, sys=2.86%, ctx=6, majf=0, minf=1636 00:35:49.507 IO depths : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.507 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.507 issued rwts: total=12480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:49.507 filename1: (groupid=0, jobs=1): err= 0: pid=118721: Wed Nov 6 14:40:16 2024 00:35:49.507 read: IOPS=2495, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5001msec) 00:35:49.507 slat (nsec): min=3890, max=51667, avg=13196.07, stdev=7926.12 00:35:49.507 clat (usec): min=2443, max=3995, avg=3148.46, stdev=69.82 00:35:49.507 lat (usec): min=2452, max=4005, avg=3161.65, stdev=68.11 00:35:49.507 clat percentiles (usec): 00:35:49.507 | 1.00th=[ 3032], 5.00th=[ 3064], 10.00th=[ 3097], 20.00th=[ 3097], 00:35:49.507 | 30.00th=[ 3130], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3163], 00:35:49.507 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3195], 95.00th=[ 3228], 00:35:49.507 | 99.00th=[ 3261], 99.50th=[ 3294], 99.90th=[ 3851], 99.95th=[ 3884], 00:35:49.507 | 99.99th=[ 3916] 00:35:49.507 bw ( KiB/s): min=19928, max=20096, per=25.02%, avg=19977.78, stdev=46.26, samples=9 00:35:49.507 iops : min= 2491, max= 2512, avg=2497.22, stdev= 5.78, samples=9 00:35:49.507 lat (msec) : 4=100.00% 00:35:49.507 cpu : usr=95.56%, sys=3.38%, ctx=52, majf=0, minf=1636 00:35:49.507 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.507 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.507 issued rwts: total=12480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:49.507 00:35:49.507 Run status group 0 (all jobs): 00:35:49.507 READ: bw=78.0MiB/s (81.8MB/s), 19.5MiB/s-19.5MiB/s (20.4MB/s-20.4MB/s), io=390MiB (409MB), run=5001-5003msec 00:35:49.507 ----------------------------------------------------- 00:35:49.507 Suppressions used: 00:35:49.507 count bytes template 00:35:49.507 6 52 /usr/src/fio/parse.c 00:35:49.507 1 8 libtcmalloc_minimal.so 00:35:49.507 1 904 libcrypto.so 00:35:49.507 ----------------------------------------------------- 00:35:49.507 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.507 00:35:49.507 real 0m26.096s 00:35:49.507 user 2m11.610s 00:35:49.507 sys 0m3.289s 00:35:49.507 ************************************ 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:49.507 14:40:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.507 END TEST fio_dif_rand_params 00:35:49.507 ************************************ 00:35:49.507 14:40:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:49.507 14:40:17 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:49.507 14:40:17 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:49.507 14:40:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.767 ************************************ 00:35:49.767 START TEST fio_dif_digest 00:35:49.767 ************************************ 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:49.767 14:40:17 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.767 bdev_null0 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.767 [2024-11-06 14:40:17.172564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.767 14:40:17 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:49.767 { 00:35:49.767 "params": { 00:35:49.767 "name": "Nvme$subsystem", 00:35:49.767 "trtype": "$TEST_TRANSPORT", 00:35:49.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.767 "adrfam": "ipv4", 00:35:49.767 "trsvcid": "$NVMF_PORT", 00:35:49.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.767 "hdgst": ${hdgst:-false}, 00:35:49.767 "ddgst": ${ddgst:-false} 00:35:49.767 }, 00:35:49.767 "method": "bdev_nvme_attach_controller" 00:35:49.767 } 00:35:49.767 EOF 00:35:49.767 )") 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:49.767 14:40:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
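Note: the only functional difference between this digest run and the earlier fio_dif runs is that hdgst=true and ddgst=true were set at target/dif.sh@128 above, so the ${hdgst:-false}/${ddgst:-false} defaults in the template expand to true and the controller is attached with NVMe/TCP header and data digests (per-PDU CRC32C checks) enabled, as the config printed just below shows. A hypothetical by-hand equivalent of that attach call, sent straight to SPDK's JSON-RPC socket (default socket path assumed; the trace's rpc_cmd wrapper ultimately issues the same method):

# Assumes the target is listening on the default RPC socket.
nc -U /var/tmp/spdk.sock <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
 "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true, "ddgst": true}}
EOF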
00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:49.768 "params": { 00:35:49.768 "name": "Nvme0", 00:35:49.768 "trtype": "tcp", 00:35:49.768 "traddr": "10.0.0.3", 00:35:49.768 "adrfam": "ipv4", 00:35:49.768 "trsvcid": "4420", 00:35:49.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.768 "hdgst": true, 00:35:49.768 "ddgst": true 00:35:49.768 }, 00:35:49.768 "method": "bdev_nvme_attach_controller" 00:35:49.768 }' 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # break 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:49.768 14:40:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.768 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:49.768 ... 00:35:49.768 fio-3.35 00:35:49.768 Starting 3 threads 00:36:02.034 00:36:02.034 filename0: (groupid=0, jobs=1): err= 0: pid=118831: Wed Nov 6 14:40:28 2024 00:36:02.034 read: IOPS=243, BW=30.5MiB/s (32.0MB/s)(305MiB/10004msec) 00:36:02.034 slat (nsec): min=6459, max=52592, avg=11462.98, stdev=1881.19 00:36:02.034 clat (usec): min=7234, max=16083, avg=12285.08, stdev=2382.54 00:36:02.034 lat (usec): min=7244, max=16112, avg=12296.54, stdev=2382.76 00:36:02.034 clat percentiles (usec): 00:36:02.034 | 1.00th=[ 7832], 5.00th=[ 8160], 10.00th=[ 8356], 20.00th=[ 8979], 00:36:02.034 | 30.00th=[11338], 40.00th=[12911], 50.00th=[13304], 60.00th=[13698], 00:36:02.034 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[14877], 00:36:02.034 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15926], 99.95th=[15926], 00:36:02.034 | 99.99th=[16057] 00:36:02.035 bw ( KiB/s): min=28416, max=34560, per=31.45%, avg=31434.11, stdev=1891.97, samples=19 00:36:02.035 iops : min= 222, max= 270, avg=245.58, stdev=14.78, samples=19 00:36:02.035 lat (msec) : 10=26.64%, 20=73.36% 00:36:02.035 cpu : usr=94.56%, sys=4.23%, ctx=19, majf=0, minf=1636 00:36:02.035 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.035 issued rwts: total=2440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.035 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.035 filename0: (groupid=0, jobs=1): err= 0: pid=118832: Wed Nov 6 14:40:28 2024 00:36:02.035 read: IOPS=242, BW=30.4MiB/s (31.8MB/s)(304MiB/10008msec) 00:36:02.035 slat (usec): min=6, max=105, avg=11.30, stdev= 3.71 00:36:02.035 clat (usec): min=4516, max=52107, avg=12335.73, stdev=9774.41 00:36:02.035 lat (usec): min=4523, max=52120, avg=12347.03, stdev=9774.32 00:36:02.035 clat percentiles (usec): 00:36:02.035 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:36:02.035 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:36:02.035 | 
70.00th=[10159], 80.00th=[10421], 90.00th=[10945], 95.00th=[49546], 00:36:02.035 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:36:02.035 | 99.99th=[52167] 00:36:02.035 bw ( KiB/s): min=22784, max=38912, per=30.77%, avg=30760.42, stdev=4703.61, samples=19 00:36:02.035 iops : min= 178, max= 304, avg=240.32, stdev=36.75, samples=19 00:36:02.035 lat (msec) : 10=55.82%, 20=38.01%, 50=2.18%, 100=3.99% 00:36:02.035 cpu : usr=94.71%, sys=4.01%, ctx=177, majf=0, minf=1636 00:36:02.035 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.035 issued rwts: total=2431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.035 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.035 filename0: (groupid=0, jobs=1): err= 0: pid=118833: Wed Nov 6 14:40:28 2024 00:36:02.035 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(368MiB/10003msec) 00:36:02.035 slat (nsec): min=6387, max=87033, avg=10584.03, stdev=2943.24 00:36:02.035 clat (usec): min=5518, max=15453, avg=10178.27, stdev=2052.00 00:36:02.035 lat (usec): min=5528, max=15540, avg=10188.85, stdev=2052.16 00:36:02.035 clat percentiles (usec): 00:36:02.035 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7439], 00:36:02.035 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:36:02.035 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12387], 95.00th=[12649], 00:36:02.035 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14746], 99.95th=[15401], 00:36:02.035 | 99.99th=[15401] 00:36:02.035 bw ( KiB/s): min=34048, max=42240, per=37.87%, avg=37847.58, stdev=2137.19, samples=19 00:36:02.035 iops : min= 266, max= 330, avg=295.68, stdev=16.70, samples=19 00:36:02.035 lat (msec) : 10=33.73%, 20=66.27% 00:36:02.035 cpu : usr=94.92%, sys=3.96%, ctx=12, majf=0, minf=1634 00:36:02.035 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.035 issued rwts: total=2944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.035 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.035 00:36:02.035 Run status group 0 (all jobs): 00:36:02.035 READ: bw=97.6MiB/s (102MB/s), 30.4MiB/s-36.8MiB/s (31.8MB/s-38.6MB/s), io=977MiB (1024MB), run=10003-10008msec 00:36:02.035 ----------------------------------------------------- 00:36:02.035 Suppressions used: 00:36:02.035 count bytes template 00:36:02.035 5 44 /usr/src/fio/parse.c 00:36:02.035 1 8 libtcmalloc_minimal.so 00:36:02.035 1 904 libcrypto.so 00:36:02.035 ----------------------------------------------------- 00:36:02.035 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
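Note on the teardown that starts just above and continues below: destroy_subsystems mirrors setup in reverse, deleting the NVMe-oF subsystem first and only then the null bdev that backed its namespace, so the bdev is never removed out from under an exported namespace. The same two RPCs as direct rpc.py calls (a sketch; rpc.py assumed on PATH and pointed at the running target):

# Delete the subsystem before the bdev it exports, matching the trace order.
destroy_subsystem_sketch() {
  local sub_id=$1
  rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
  rpc.py bdev_null_delete "bdev_null$sub_id"
}

destroy_subsystem_sketch 0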
00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.035 ************************************ 00:36:02.035 END TEST fio_dif_digest 00:36:02.035 ************************************ 00:36:02.035 00:36:02.035 real 0m11.761s 00:36:02.035 user 0m29.781s 00:36:02.035 sys 0m1.495s 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:02.035 14:40:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.035 14:40:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:02.035 14:40:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:02.035 14:40:28 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:02.035 14:40:28 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:02.035 14:40:28 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.035 14:40:28 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:02.035 14:40:28 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.035 14:40:28 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.035 rmmod nvme_tcp 00:36:02.035 rmmod nvme_fabrics 00:36:02.035 rmmod nvme_keyring 00:36:02.035 14:40:29 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.035 14:40:29 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:02.035 14:40:29 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:02.035 14:40:29 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 118066 ']' 00:36:02.035 14:40:29 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 118066 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 118066 ']' 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 118066 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 118066 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:02.035 killing process with pid 118066 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 118066' 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@971 -- # kill 118066 00:36:02.035 14:40:29 nvmf_dif -- common/autotest_common.sh@976 -- # wait 118066 00:36:02.295 14:40:29 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:02.295 14:40:29 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:02.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:02.554 Waiting for block devices as requested 00:36:02.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:02.814 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p 
]] 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:02.814 14:40:30 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:03.075 14:40:30 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:03.075 14:40:30 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:03.075 14:40:30 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:03.075 14:40:30 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:03.075 14:40:30 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:03.075 14:40:30 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.075 14:40:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:03.075 14:40:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.075 14:40:30 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:36:03.075 00:36:03.075 real 1m5.483s 00:36:03.075 user 4m7.458s 00:36:03.075 sys 0m10.956s 00:36:03.075 14:40:30 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:03.075 ************************************ 00:36:03.075 END TEST nvmf_dif 00:36:03.075 ************************************ 00:36:03.075 14:40:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.075 14:40:30 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:03.075 14:40:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:03.075 14:40:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:03.075 14:40:30 -- common/autotest_common.sh@10 -- # set +x 00:36:03.075 ************************************ 00:36:03.075 START TEST nvmf_abort_qd_sizes 00:36:03.075 ************************************ 00:36:03.075 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:03.075 * Looking for test storage... 
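Note on the nvmf_veth_fini sequence in the teardown above (nvmf/common.sh@233-246): it dismantles the virtual test network in reverse order of construction, detaching the four bridge ports, bringing the links down, deleting the bridge, deleting the host-side veth ends, then the namespaced target ends. Condensed into a sketch; the final step assumes _remove_spdk_ns, whose body is not visible in this trace, amounts to deleting the namespace:

# Reverse-order teardown of the veth/bridge topology, as traced above.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" nomaster
done
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns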
00:36:03.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:03.075 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:03.075 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:03.075 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:03.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.334 --rc genhtml_branch_coverage=1 00:36:03.334 --rc genhtml_function_coverage=1 00:36:03.334 --rc genhtml_legend=1 00:36:03.334 --rc geninfo_all_blocks=1 00:36:03.334 --rc geninfo_unexecuted_blocks=1 00:36:03.334 00:36:03.334 ' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:03.334 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.334 --rc genhtml_branch_coverage=1 00:36:03.334 --rc genhtml_function_coverage=1 00:36:03.334 --rc genhtml_legend=1 00:36:03.334 --rc geninfo_all_blocks=1 00:36:03.334 --rc geninfo_unexecuted_blocks=1 00:36:03.334 00:36:03.334 ' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:03.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.334 --rc genhtml_branch_coverage=1 00:36:03.334 --rc genhtml_function_coverage=1 00:36:03.334 --rc genhtml_legend=1 00:36:03.334 --rc geninfo_all_blocks=1 00:36:03.334 --rc geninfo_unexecuted_blocks=1 00:36:03.334 00:36:03.334 ' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:03.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.334 --rc genhtml_branch_coverage=1 00:36:03.334 --rc genhtml_function_coverage=1 00:36:03.334 --rc genhtml_legend=1 00:36:03.334 --rc geninfo_all_blocks=1 00:36:03.334 --rc geninfo_unexecuted_blocks=1 00:36:03.334 00:36:03.334 ' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.334 14:40:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:03.335 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:03.335 Cannot find device "nvmf_init_br" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:03.335 Cannot find device "nvmf_init_br2" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:03.335 Cannot find device "nvmf_tgt_br" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:03.335 Cannot find device "nvmf_tgt_br2" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:03.335 Cannot find device "nvmf_init_br" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:36:03.335 Cannot find device "nvmf_init_br2" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:03.335 Cannot find device "nvmf_tgt_br" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:03.335 Cannot find device "nvmf_tgt_br2" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:03.335 Cannot find device "nvmf_br" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:03.335 Cannot find device "nvmf_init_if" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:03.335 Cannot find device "nvmf_init_if2" 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:03.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:03.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:03.335 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
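Note on the topology nvmf_veth_init is assembling here: four veth pairs, with the initiator ends (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) left in the root namespace, the target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) moved into nvmf_tgt_ns_spdk, and the four peer ends enslaved to bridge nvmf_br in the commands that follow below. Reduced to a single initiator/target pair, the essential steps are:

ip netns add nvmf_tgt_ns_spdk
# veth pair for the initiator side; the *_br end will join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
# veth pair for the target side, its inner end pushed into the namespace.
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_init_br up
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br up
# 10.0.0.1 (root ns) can now reach 10.0.0.3 (target ns), which is exactly
# what the pings further down verify before any NVMe/TCP traffic flows.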
00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:03.594 14:40:30 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:03.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:03.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:36:03.594 00:36:03.594 --- 10.0.0.3 ping statistics --- 00:36:03.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.594 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:36:03.594 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:03.594 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:03.595 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:36:03.595 00:36:03.595 --- 10.0.0.4 ping statistics --- 00:36:03.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.595 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:36:03.595 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:03.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:03.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 00:36:03.595 00:36:03.595 --- 10.0.0.1 ping statistics --- 00:36:03.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.595 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 00:36:03.595 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:03.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:36:03.595 00:36:03.595 --- 10.0.0.2 ping statistics --- 00:36:03.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.595 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:36:03.595 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.595 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:36:03.595 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:03.595 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:04.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:04.165 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:04.165 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:04.165 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:04.165 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:04.165 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:04.165 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:04.165 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:04.165 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=119494 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 119494 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 119494 ']' 00:36:04.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:04.425 14:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:04.425 [2024-11-06 14:40:31.885624] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:36:04.425 [2024-11-06 14:40:31.885707] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.425 [2024-11-06 14:40:32.036089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:04.684 [2024-11-06 14:40:32.137168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.684 [2024-11-06 14:40:32.137209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.684 [2024-11-06 14:40:32.137220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.684 [2024-11-06 14:40:32.137228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.684 [2024-11-06 14:40:32.137238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:04.684 [2024-11-06 14:40:32.138729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.684 [2024-11-06 14:40:32.139179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.684 [2024-11-06 14:40:32.139340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.684 [2024-11-06 14:40:32.139453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:36:05.251 14:40:32 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:05.251 14:40:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.251 ************************************ 00:36:05.251 START TEST spdk_target_abort 00:36:05.251 ************************************ 00:36:05.251 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.252 spdk_targetn1 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.252 [2024-11-06 14:40:32.826587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.252 [2024-11-06 14:40:32.877711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.252 14:40:32 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:05.252 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:08.574 Initializing NVMe Controllers 00:36:08.574 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:36:08.574 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:08.574 Initialization complete. Launching workers. 
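Before this first pass launched its workers, the target was provisioned entirely over JSON-RPC (the rpc_cmd traces at abort_qd_sizes.sh@45-50 above). The same sequence expressed with scripts/rpc.py, as a hedged sketch — rpc_cmd here talks to the default /var/tmp/spdk.sock socket, and the repo path is assumed to match this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # exposes bdev spdk_targetn1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420

The abort example then runs once per queue depth in qds=(4 24 64) against that listener; the counters for this qd=4 pass follow below.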
00:36:08.574 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13441, failed: 0 00:36:08.574 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1082, failed to submit 12359 00:36:08.574 success 786, unsuccessful 296, failed 0 00:36:08.574 14:40:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:08.574 14:40:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:12.759 Initializing NVMe Controllers 00:36:12.759 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:36:12.759 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:12.759 Initialization complete. Launching workers. 00:36:12.759 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5988, failed: 0 00:36:12.759 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 4760 00:36:12.759 success 255, unsuccessful 973, failed 0 00:36:12.759 14:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:12.759 14:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.290 Initializing NVMe Controllers 00:36:15.290 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:36:15.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:15.290 Initialization complete. Launching workers. 
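The counters printed after each pass obey two invariants that make a quick sanity check possible: aborts submitted plus aborts that failed to submit equals I/O completed, and success plus unsuccessful plus failed equals aborts submitted (qd=4 above: 1082 + 12359 = 13441 and 786 + 296 + 0 = 1082; qd=24: 1228 + 4760 = 5988 and 255 + 973 + 0 = 1228; the qd=64 pass below checks out the same way with 2585 + 28072 = 30657 and 430 + 2155 + 0 = 2585). A tiny hypothetical helper, not part of abort_qd_sizes.sh, that encodes the check:

check_run() {
    # usage: check_run <completed> <submitted> <failed_to_submit> <success> <unsuccessful>
    # the "failed" column is 0 in every pass above, so it is omitted here
    (( $2 + $3 == $1 && $4 + $5 == $2 )) || echo "inconsistent counters: $*"
}
check_run 5988 1228 4760 255 973   # the qd=24 pass above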
00:36:15.290 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30657, failed: 0 00:36:15.290 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2585, failed to submit 28072 00:36:15.290 success 430, unsuccessful 2155, failed 0 00:36:15.550 14:40:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:15.551 14:40:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.551 14:40:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.551 14:40:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.551 14:40:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:15.551 14:40:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.551 14:40:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.121 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.121 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 119494 00:36:16.121 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 119494 ']' 00:36:16.121 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 119494 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 119494 00:36:16.122 killing process with pid 119494 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 119494' 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 119494 00:36:16.122 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 119494 00:36:16.690 00:36:16.690 real 0m11.510s 00:36:16.690 user 0m44.751s 00:36:16.690 sys 0m1.445s 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:16.690 ************************************ 00:36:16.690 END TEST spdk_target_abort 00:36:16.690 ************************************ 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.690 14:40:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:16.690 14:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:16.690 14:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:16.690 14:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:16.690 ************************************ 00:36:16.690 START TEST kernel_target_abort 00:36:16.690 
************************************ 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:16.690 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:16.951 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:16.951 14:40:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:17.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:17.210 Waiting for block devices as requested 00:36:17.210 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:17.210 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:17.781 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:17.781 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:36:17.782 No valid GPT data, bailing 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:36:17.782 No valid GPT data, bailing 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:36:17.782 No valid GPT data, bailing 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:36:17.782 No valid GPT data, bailing 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:17.782 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 --hostid=9ca3beaa-4765-4780-96a0-c04fd97e7435 -a 10.0.0.1 -t tcp -s 4420 00:36:18.041 00:36:18.041 Discovery Log Number of Records 2, Generation counter 2 00:36:18.041 =====Discovery Log Entry 0====== 00:36:18.041 trtype: tcp 00:36:18.041 adrfam: ipv4 00:36:18.041 subtype: current discovery subsystem 00:36:18.041 treq: not specified, sq flow control disable supported 00:36:18.041 portid: 1 00:36:18.041 trsvcid: 4420 00:36:18.041 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:18.041 traddr: 10.0.0.1 00:36:18.041 eflags: none 00:36:18.041 sectype: none 00:36:18.041 =====Discovery Log Entry 1====== 00:36:18.041 trtype: tcp 00:36:18.041 adrfam: ipv4 00:36:18.041 subtype: nvme subsystem 00:36:18.041 treq: not specified, sq flow control disable supported 00:36:18.041 portid: 1 00:36:18.041 trsvcid: 4420 00:36:18.041 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:18.041 traddr: 10.0.0.1 00:36:18.041 eflags: none 00:36:18.041 sectype: none 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:18.041 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:18.042 14:40:45 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.042 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.325 Initializing NVMe Controllers 00:36:21.326 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:21.326 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:21.326 Initialization complete. Launching workers. 00:36:21.326 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36386, failed: 0 00:36:21.326 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36386, failed to submit 0 00:36:21.326 success 0, unsuccessful 36386, failed 0 00:36:21.326 14:40:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.326 14:40:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.627 Initializing NVMe Controllers 00:36:24.627 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.627 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.627 Initialization complete. Launching workers. 
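Unlike the spdk_target variant, this test needs no userspace target: configure_kernel_target (the nvmf/common.sh@686-705 traces above) builds the whole target out of the kernel's nvmet configfs tree, backed by the one NVMe disk that had no partition table (/dev/nvme1n1). A condensed sketch of that recipe — paths and values are taken from the traces, the attribute file names are the standard nvmet configfs ones, and the allow_any_host line is an assumption, since the trace only shows unlabelled echo 1 commands:

nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"         # parent first; configfs auto-creates namespaces/
echo 1 > "$sub/attr_allow_any_host"              # assumed; lets any hostnqn connect
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                 # the port starts serving at this link

nvme discover against 10.0.0.1:4420 (above) then returns two records, the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, confirming the port is live before the abort passes run.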
00:36:24.627 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53601, failed: 0 00:36:24.627 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22102, failed to submit 31499 00:36:24.627 success 0, unsuccessful 22102, failed 0 00:36:24.627 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.627 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.942 Initializing NVMe Controllers 00:36:27.942 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.942 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.942 Initialization complete. Launching workers. 00:36:27.942 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72156, failed: 0 00:36:27.942 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18020, failed to submit 54136 00:36:27.942 success 0, unsuccessful 18020, failed 0 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:27.942 14:40:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:28.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:29.920 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:29.920 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:29.920 ************************************ 00:36:29.920 END TEST kernel_target_abort 00:36:29.920 ************************************ 00:36:29.920 00:36:29.920 real 0m13.124s 00:36:29.920 user 0m6.655s 00:36:29.920 sys 0m3.585s 00:36:29.920 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:29.920 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.920 14:40:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:29.920 14:40:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:29.920 
14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:29.920 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:29.920 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:29.920 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:29.920 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:29.920 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:29.920 rmmod nvme_tcp 00:36:29.920 rmmod nvme_fabrics 00:36:30.182 rmmod nvme_keyring 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 119494 ']' 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 119494 00:36:30.182 Process with pid 119494 is not found 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 119494 ']' 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 119494 00:36:30.182 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (119494) - No such process 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 119494 is not found' 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:30.182 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:30.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:30.444 Waiting for block devices as requested 00:36:30.444 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:30.444 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:30.705 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:30.706 14:40:58 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:30.706 14:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.968 14:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:36:30.968 00:36:30.968 real 0m27.757s 00:36:30.968 user 0m52.358s 00:36:30.968 sys 0m6.252s 00:36:30.968 ************************************ 00:36:30.968 END TEST nvmf_abort_qd_sizes 00:36:30.968 ************************************ 00:36:30.968 14:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:30.968 14:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.968 14:40:58 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:30.968 14:40:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:30.968 14:40:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:30.968 14:40:58 -- common/autotest_common.sh@10 -- # set +x 00:36:30.968 ************************************ 00:36:30.968 START TEST keyring_file 00:36:30.968 ************************************ 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:30.968 * Looking for test storage... 
00:36:30.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:30.968 14:40:58 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:30.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.968 --rc genhtml_branch_coverage=1 00:36:30.968 --rc genhtml_function_coverage=1 00:36:30.968 --rc genhtml_legend=1 00:36:30.968 --rc geninfo_all_blocks=1 00:36:30.968 --rc geninfo_unexecuted_blocks=1 00:36:30.968 00:36:30.968 ' 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:30.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.968 --rc genhtml_branch_coverage=1 00:36:30.968 --rc genhtml_function_coverage=1 00:36:30.968 --rc genhtml_legend=1 00:36:30.968 --rc geninfo_all_blocks=1 00:36:30.968 --rc 
geninfo_unexecuted_blocks=1 00:36:30.968 00:36:30.968 ' 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:30.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.968 --rc genhtml_branch_coverage=1 00:36:30.968 --rc genhtml_function_coverage=1 00:36:30.968 --rc genhtml_legend=1 00:36:30.968 --rc geninfo_all_blocks=1 00:36:30.968 --rc geninfo_unexecuted_blocks=1 00:36:30.968 00:36:30.968 ' 00:36:30.968 14:40:58 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:30.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.968 --rc genhtml_branch_coverage=1 00:36:30.968 --rc genhtml_function_coverage=1 00:36:30.968 --rc genhtml_legend=1 00:36:30.968 --rc geninfo_all_blocks=1 00:36:30.968 --rc geninfo_unexecuted_blocks=1 00:36:30.968 00:36:30.968 ' 00:36:30.968 14:40:58 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:30.968 14:40:58 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.968 14:40:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:31.229 14:40:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:36:31.229 14:40:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:36:31.229 14:40:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:31.229 14:40:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:31.229 14:40:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:31.229 14:40:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:31.229 14:40:58 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:31.229 14:40:58 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:31.229 14:40:58 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.229 14:40:58 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.230 14:40:58 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.230 14:40:58 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.230 14:40:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.230 14:40:58 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.230 14:40:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:31.230 14:40:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:31.230 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:31.230 14:40:58 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YXvWqcFYjR 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YXvWqcFYjR 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YXvWqcFYjR 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YXvWqcFYjR 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GH78l7JYNh 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:31.230 14:40:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GH78l7JYNh 00:36:31.230 14:40:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GH78l7JYNh 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.GH78l7JYNh 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=120571 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 120571 00:36:31.230 14:40:58 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 120571 ']' 00:36:31.230 14:40:58 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.230 14:40:58 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:31.230 14:40:58 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:31.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.230 14:40:58 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:31.230 14:40:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:31.230 14:40:58 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:31.230 [2024-11-06 14:40:58.823921] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:36:31.230 [2024-11-06 14:40:58.824214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120571 ] 00:36:31.491 [2024-11-06 14:40:58.988973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.491 [2024-11-06 14:40:59.107488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.433 14:40:59 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:32.433 14:40:59 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:32.433 14:40:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:32.433 14:40:59 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.433 14:40:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:32.433 [2024-11-06 14:40:59.805673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.433 null0 00:36:32.433 [2024-11-06 14:40:59.837666] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:32.433 [2024-11-06 14:40:59.837903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.434 14:40:59 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:32.434 [2024-11-06 14:40:59.865637] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:32.434 2024/11/06 14:40:59 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:36:32.434 request: 00:36:32.434 { 00:36:32.434 "method": 
"nvmf_subsystem_add_listener", 00:36:32.434 "params": { 00:36:32.434 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.434 "secure_channel": false, 00:36:32.434 "listen_address": { 00:36:32.434 "trtype": "tcp", 00:36:32.434 "traddr": "127.0.0.1", 00:36:32.434 "trsvcid": "4420" 00:36:32.434 } 00:36:32.434 } 00:36:32.434 } 00:36:32.434 Got JSON-RPC error response 00:36:32.434 GoRPCClient: error on JSON-RPC call 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:32.434 14:40:59 keyring_file -- keyring/file.sh@47 -- # bperfpid=120602 00:36:32.434 14:40:59 keyring_file -- keyring/file.sh@49 -- # waitforlisten 120602 /var/tmp/bperf.sock 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 120602 ']' 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.434 14:40:59 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:32.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:32.434 14:40:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:32.434 [2024-11-06 14:40:59.952099] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:36:32.434 [2024-11-06 14:40:59.952241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120602 ] 00:36:32.695 [2024-11-06 14:41:00.112657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.695 [2024-11-06 14:41:00.222591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.268 14:41:00 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:33.268 14:41:00 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:33.268 14:41:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:33.268 14:41:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:33.529 14:41:01 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.GH78l7JYNh 00:36:33.529 14:41:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.GH78l7JYNh 00:36:33.791 14:41:01 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:33.791 14:41:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:33.791 14:41:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.791 14:41:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.791 14:41:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:34.052 14:41:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YXvWqcFYjR == \/\t\m\p\/\t\m\p\.\Y\X\v\W\q\c\F\Y\j\R ]] 00:36:34.052 14:41:01 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:34.052 14:41:01 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:34.052 14:41:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.052 14:41:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.052 14:41:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:34.313 14:41:01 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.GH78l7JYNh == \/\t\m\p\/\t\m\p\.\G\H\7\8\l\7\J\Y\N\h ]] 00:36:34.313 14:41:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:34.313 14:41:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:34.313 14:41:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:34.313 14:41:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:34.313 14:41:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.313 14:41:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.574 14:41:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:34.574 14:41:01 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:34.574 14:41:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:34.574 14:41:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:34.574 14:41:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:34.574 14:41:01 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:36:34.574 14:41:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.574 14:41:02 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:34.574 14:41:02 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.574 14:41:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.833 [2024-11-06 14:41:02.372748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:34.833 nvme0n1 00:36:35.091 14:41:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.091 14:41:02 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:35.091 14:41:02 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.091 14:41:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.349 14:41:02 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:35.349 14:41:02 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:35.608 Running I/O for 1 seconds... 
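(Annotation: the refcount assertions above, decoded: keyring_get_keys returns one JSON object per key, get_key filters by name, and get_refcnt extracts .refcnt. A freshly added key reports refcnt 1; attaching nvme0 with --psk key0 is what bumps key0 to 2 while key1 stays at 1, which is exactly what the (( 2 == 2 )) / (( 1 == 1 )) checks assert. Assembled from the keyring/common.sh helpers visible in the trace.)

get_key() {
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}

get_refcnt() {
    get_key "$1" | jq -r .refcnt
}

(( $(get_refcnt key0) == 2 ))    # held by the keyring and by nvme0
(( $(get_refcnt key1) == 1 ))    # registered but currently unused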
00:36:36.544 16665.00 IOPS, 65.10 MiB/s 00:36:36.544 Latency(us) 00:36:36.544 [2024-11-06T14:41:04.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.544 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:36.544 nvme0n1 : 1.01 16709.83 65.27 0.00 0.00 7643.89 3377.62 12199.78 00:36:36.544 [2024-11-06T14:41:04.179Z] =================================================================================================================== 00:36:36.544 [2024-11-06T14:41:04.179Z] Total : 16709.83 65.27 0.00 0.00 7643.89 3377.62 12199.78 00:36:36.544 { 00:36:36.544 "results": [ 00:36:36.544 { 00:36:36.544 "job": "nvme0n1", 00:36:36.544 "core_mask": "0x2", 00:36:36.544 "workload": "randrw", 00:36:36.544 "percentage": 50, 00:36:36.544 "status": "finished", 00:36:36.544 "queue_depth": 128, 00:36:36.544 "io_size": 4096, 00:36:36.544 "runtime": 1.005037, 00:36:36.544 "iops": 16709.832573328145, 00:36:36.544 "mibps": 65.27278348956307, 00:36:36.544 "io_failed": 0, 00:36:36.544 "io_timeout": 0, 00:36:36.544 "avg_latency_us": 7643.892980093624, 00:36:36.544 "min_latency_us": 3377.6246153846155, 00:36:36.544 "max_latency_us": 12199.77846153846 00:36:36.544 } 00:36:36.544 ], 00:36:36.544 "core_count": 1 00:36:36.544 } 00:36:36.544 14:41:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:36.544 14:41:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:36.803 14:41:04 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:36.803 14:41:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:36.803 14:41:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.803 14:41:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.803 14:41:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:36.803 14:41:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.062 14:41:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:37.062 14:41:04 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:37.062 14:41:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.062 14:41:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:37.062 14:41:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.062 14:41:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.062 14:41:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:37.062 14:41:04 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:37.062 14:41:04 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:37.062 14:41:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:37.062 14:41:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:37.062 14:41:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:37.062 14:41:04 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:37.062 14:41:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:37.062 14:41:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:37.062 14:41:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:37.062 14:41:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:37.321 [2024-11-06 14:41:04.842064] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:37.321 [2024-11-06 14:41:04.842914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:36:37.321 [2024-11-06 14:41:04.843886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:36:37.321 [2024-11-06 14:41:04.844880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:37.321 [2024-11-06 14:41:04.844976] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:37.321 [2024-11-06 14:41:04.845018] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:37.321 [2024-11-06 14:41:04.845057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
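(Annotation: the wrong-key path above in short: the target only accepts key0, so a handshake with key1 dies mid-TLS, which is why the "Transport endpoint is not connected" and "Bad file descriptor" errors appear in a pair; the RPC then surfaces Code=-5, and the NOT wrapper converts that failure into a pass. A minimal sketch of the NOT idiom, assuming the usual autotest_common.sh behavior; the real helper also validates its argument and tracks the exit status in es.)

NOT() {
    if "$@"; then
        return 1    # unexpected success means the negative test failed
    fi
    return 0        # expected failure: the test passes
}

NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1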
00:36:37.321 2024/11/06 14:41:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:37.321 request: 00:36:37.321 { 00:36:37.321 "method": "bdev_nvme_attach_controller", 00:36:37.321 "params": { 00:36:37.321 "name": "nvme0", 00:36:37.321 "trtype": "tcp", 00:36:37.321 "traddr": "127.0.0.1", 00:36:37.321 "adrfam": "ipv4", 00:36:37.321 "trsvcid": "4420", 00:36:37.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.321 "prchk_reftag": false, 00:36:37.321 "prchk_guard": false, 00:36:37.321 "hdgst": false, 00:36:37.321 "ddgst": false, 00:36:37.321 "psk": "key1", 00:36:37.321 "allow_unrecognized_csi": false 00:36:37.321 } 00:36:37.321 } 00:36:37.321 Got JSON-RPC error response 00:36:37.321 GoRPCClient: error on JSON-RPC call 00:36:37.321 14:41:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:37.321 14:41:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:37.321 14:41:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:37.321 14:41:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:37.321 14:41:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:37.321 14:41:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:37.321 14:41:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.321 14:41:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.321 14:41:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.321 14:41:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:37.580 14:41:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:37.580 14:41:05 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:37.580 14:41:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:37.580 14:41:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.580 14:41:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:37.580 14:41:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.580 14:41:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.839 14:41:05 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:37.839 14:41:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:37.839 14:41:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:38.097 14:41:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:38.097 14:41:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:38.097 14:41:05 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:38.097 14:41:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:38.097 14:41:05 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:38.356 14:41:05 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:38.356 14:41:05 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.YXvWqcFYjR 00:36:38.356 14:41:05 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:38.356 14:41:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:38.356 14:41:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:38.356 14:41:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:38.356 14:41:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:38.356 14:41:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:38.356 14:41:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:38.356 14:41:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:38.356 14:41:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:38.615 [2024-11-06 14:41:06.096923] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YXvWqcFYjR': 0100660 00:36:38.615 [2024-11-06 14:41:06.097267] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:38.615 2024/11/06 14:41:06 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.YXvWqcFYjR], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:36:38.615 request: 00:36:38.615 { 00:36:38.615 "method": "keyring_file_add_key", 00:36:38.615 "params": { 00:36:38.615 "name": "key0", 00:36:38.615 "path": "/tmp/tmp.YXvWqcFYjR" 00:36:38.615 } 00:36:38.615 } 00:36:38.615 Got JSON-RPC error response 00:36:38.615 GoRPCClient: error on JSON-RPC call 00:36:38.615 14:41:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:38.615 14:41:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:38.615 14:41:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:38.615 14:41:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:38.615 14:41:06 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.YXvWqcFYjR 00:36:38.615 14:41:06 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:38.615 14:41:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YXvWqcFYjR 00:36:38.874 14:41:06 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.YXvWqcFYjR 00:36:38.874 14:41:06 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:38.874 14:41:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.874 14:41:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.874 14:41:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.874 14:41:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.874 14:41:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:39.134 14:41:06 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:39.134 14:41:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.134 14:41:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.134 [2024-11-06 14:41:06.717088] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YXvWqcFYjR': No such file or directory 00:36:39.134 [2024-11-06 14:41:06.717120] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:39.134 [2024-11-06 14:41:06.717138] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:39.134 [2024-11-06 14:41:06.717147] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:39.134 [2024-11-06 14:41:06.717156] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:39.134 [2024-11-06 14:41:06.717165] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:39.134 2024/11/06 14:41:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:36:39.134 request: 00:36:39.134 { 00:36:39.134 "method": "bdev_nvme_attach_controller", 00:36:39.134 "params": { 00:36:39.134 "name": "nvme0", 00:36:39.134 "trtype": "tcp", 00:36:39.134 "traddr": "127.0.0.1", 00:36:39.134 "adrfam": "ipv4", 00:36:39.134 "trsvcid": "4420", 00:36:39.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:39.134 "prchk_reftag": false, 00:36:39.134 "prchk_guard": false, 00:36:39.134 "hdgst": false, 00:36:39.134 "ddgst": false, 00:36:39.134 "psk": "key0", 00:36:39.134 "allow_unrecognized_csi": false 00:36:39.134 } 00:36:39.134 } 00:36:39.134 Got JSON-RPC error response 00:36:39.134 
GoRPCClient: error on JSON-RPC call 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:39.134 14:41:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:39.134 14:41:06 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:39.134 14:41:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:39.393 14:41:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CNh1CWMKWL 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:39.393 14:41:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:39.393 14:41:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:39.393 14:41:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:39.393 14:41:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:39.393 14:41:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:39.393 14:41:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CNh1CWMKWL 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CNh1CWMKWL 00:36:39.393 14:41:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CNh1CWMKWL 00:36:39.393 14:41:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CNh1CWMKWL 00:36:39.393 14:41:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CNh1CWMKWL 00:36:39.651 14:41:07 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.651 14:41:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.910 nvme0n1 00:36:39.910 14:41:07 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:39.910 14:41:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:39.911 14:41:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:39.911 14:41:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.911 14:41:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:39.911 14:41:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
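(Annotation: the two key-file validation failures above, condensed: keyring_file re-checks the backing file when it is used, so a group-accessible mode is rejected at keyring_file_add_key time, and a deleted file is rejected at attach time even though the key name is still registered. attach_with_key0 below is a hypothetical shorthand for the full bdev_nvme_attach_controller call from the trace.)

chmod 0660 "$key0path"
NOT bperf_cmd keyring_file_add_key key0 "$key0path"    # Invalid permissions ... 0100660
chmod 0600 "$key0path"
bperf_cmd keyring_file_add_key key0 "$key0path"        # accepted again

rm -f "$key0path"           # key stays registered, file is gone
NOT attach_with_key0        # Could not stat key file => Code=-19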
00:36:40.172 14:41:07 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:40.172 14:41:07 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:40.172 14:41:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:40.430 14:41:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:40.430 14:41:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:40.430 14:41:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.430 14:41:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.430 14:41:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.430 14:41:08 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:40.430 14:41:08 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:40.430 14:41:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:40.430 14:41:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.689 14:41:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.689 14:41:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.689 14:41:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.689 14:41:08 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:40.689 14:41:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:40.689 14:41:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:40.949 14:41:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:40.949 14:41:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.949 14:41:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:41.208 14:41:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:41.208 14:41:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CNh1CWMKWL 00:36:41.208 14:41:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CNh1CWMKWL 00:36:41.467 14:41:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.GH78l7JYNh 00:36:41.467 14:41:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.GH78l7JYNh 00:36:41.725 14:41:09 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.725 14:41:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.984 nvme0n1 00:36:41.984 14:41:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:41.984 14:41:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
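(Annotation: two details worth pulling out of the block above. First, removing key0 while nvme0 still referenced it did not destroy it: keyring_get_keys kept listing the key with "removed": true and refcnt 1 until the controller detached, after which the keyring emptied (jq length == 0). Second, the save_config call whose output follows is the setup for a round-trip test: the dumped JSON is handed verbatim to a brand-new bdevperf through /dev/fd/63. A sketch of that round trip, with flags copied from the trace.)

config=$(bperf_cmd save_config)

"$rootdir/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 \
    -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &    # <(...) appears as /dev/fd/63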
00:36:42.243 14:41:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:42.243 "subsystems": [ 00:36:42.243 { 00:36:42.243 "subsystem": "keyring", 00:36:42.243 "config": [ 00:36:42.243 { 00:36:42.243 "method": "keyring_file_add_key", 00:36:42.243 "params": { 00:36:42.243 "name": "key0", 00:36:42.243 "path": "/tmp/tmp.CNh1CWMKWL" 00:36:42.243 } 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "method": "keyring_file_add_key", 00:36:42.243 "params": { 00:36:42.243 "name": "key1", 00:36:42.243 "path": "/tmp/tmp.GH78l7JYNh" 00:36:42.243 } 00:36:42.243 } 00:36:42.243 ] 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "subsystem": "iobuf", 00:36:42.243 "config": [ 00:36:42.243 { 00:36:42.243 "method": "iobuf_set_options", 00:36:42.243 "params": { 00:36:42.243 "enable_numa": false, 00:36:42.243 "large_bufsize": 135168, 00:36:42.243 "large_pool_count": 1024, 00:36:42.243 "small_bufsize": 8192, 00:36:42.243 "small_pool_count": 8192 00:36:42.243 } 00:36:42.243 } 00:36:42.243 ] 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "subsystem": "sock", 00:36:42.243 "config": [ 00:36:42.243 { 00:36:42.243 "method": "sock_set_default_impl", 00:36:42.243 "params": { 00:36:42.243 "impl_name": "posix" 00:36:42.243 } 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "method": "sock_impl_set_options", 00:36:42.243 "params": { 00:36:42.243 "enable_ktls": false, 00:36:42.243 "enable_placement_id": 0, 00:36:42.243 "enable_quickack": false, 00:36:42.243 "enable_recv_pipe": true, 00:36:42.243 "enable_zerocopy_send_client": false, 00:36:42.243 "enable_zerocopy_send_server": true, 00:36:42.243 "impl_name": "ssl", 00:36:42.243 "recv_buf_size": 4096, 00:36:42.243 "send_buf_size": 4096, 00:36:42.243 "tls_version": 0, 00:36:42.243 "zerocopy_threshold": 0 00:36:42.243 } 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "method": "sock_impl_set_options", 00:36:42.243 "params": { 00:36:42.243 "enable_ktls": false, 00:36:42.243 "enable_placement_id": 0, 00:36:42.243 "enable_quickack": false, 00:36:42.243 "enable_recv_pipe": true, 00:36:42.243 "enable_zerocopy_send_client": false, 00:36:42.243 "enable_zerocopy_send_server": true, 00:36:42.243 "impl_name": "posix", 00:36:42.243 "recv_buf_size": 2097152, 00:36:42.243 "send_buf_size": 2097152, 00:36:42.243 "tls_version": 0, 00:36:42.243 "zerocopy_threshold": 0 00:36:42.243 } 00:36:42.243 } 00:36:42.243 ] 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "subsystem": "vmd", 00:36:42.243 "config": [] 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "subsystem": "accel", 00:36:42.243 "config": [ 00:36:42.243 { 00:36:42.243 "method": "accel_set_options", 00:36:42.243 "params": { 00:36:42.243 "buf_count": 2048, 00:36:42.243 "large_cache_size": 16, 00:36:42.243 "sequence_count": 2048, 00:36:42.243 "small_cache_size": 128, 00:36:42.243 "task_count": 2048 00:36:42.243 } 00:36:42.243 } 00:36:42.243 ] 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "subsystem": "bdev", 00:36:42.243 "config": [ 00:36:42.243 { 00:36:42.243 "method": "bdev_set_options", 00:36:42.243 "params": { 00:36:42.243 "bdev_auto_examine": true, 00:36:42.243 "bdev_io_cache_size": 256, 00:36:42.243 "bdev_io_pool_size": 65535, 00:36:42.243 "iobuf_large_cache_size": 16, 00:36:42.243 "iobuf_small_cache_size": 128 00:36:42.243 } 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "method": "bdev_raid_set_options", 00:36:42.243 "params": { 00:36:42.243 "process_max_bandwidth_mb_sec": 0, 00:36:42.243 "process_window_size_kb": 1024 00:36:42.243 } 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "method": "bdev_iscsi_set_options", 00:36:42.243 "params": { 00:36:42.243 
"timeout_sec": 30 00:36:42.243 } 00:36:42.243 }, 00:36:42.243 { 00:36:42.243 "method": "bdev_nvme_set_options", 00:36:42.243 "params": { 00:36:42.243 "action_on_timeout": "none", 00:36:42.243 "allow_accel_sequence": false, 00:36:42.243 "arbitration_burst": 0, 00:36:42.244 "bdev_retry_count": 3, 00:36:42.244 "ctrlr_loss_timeout_sec": 0, 00:36:42.244 "delay_cmd_submit": true, 00:36:42.244 "dhchap_dhgroups": [ 00:36:42.244 "null", 00:36:42.244 "ffdhe2048", 00:36:42.244 "ffdhe3072", 00:36:42.244 "ffdhe4096", 00:36:42.244 "ffdhe6144", 00:36:42.244 "ffdhe8192" 00:36:42.244 ], 00:36:42.244 "dhchap_digests": [ 00:36:42.244 "sha256", 00:36:42.244 "sha384", 00:36:42.244 "sha512" 00:36:42.244 ], 00:36:42.244 "disable_auto_failback": false, 00:36:42.244 "fast_io_fail_timeout_sec": 0, 00:36:42.244 "generate_uuids": false, 00:36:42.244 "high_priority_weight": 0, 00:36:42.244 "io_path_stat": false, 00:36:42.244 "io_queue_requests": 512, 00:36:42.244 "keep_alive_timeout_ms": 10000, 00:36:42.244 "low_priority_weight": 0, 00:36:42.244 "medium_priority_weight": 0, 00:36:42.244 "nvme_adminq_poll_period_us": 10000, 00:36:42.244 "nvme_error_stat": false, 00:36:42.244 "nvme_ioq_poll_period_us": 0, 00:36:42.244 "rdma_cm_event_timeout_ms": 0, 00:36:42.244 "rdma_max_cq_size": 0, 00:36:42.244 "rdma_srq_size": 0, 00:36:42.244 "reconnect_delay_sec": 0, 00:36:42.244 "timeout_admin_us": 0, 00:36:42.244 "timeout_us": 0, 00:36:42.244 "transport_ack_timeout": 0, 00:36:42.244 "transport_retry_count": 4, 00:36:42.244 "transport_tos": 0 00:36:42.244 } 00:36:42.244 }, 00:36:42.244 { 00:36:42.244 "method": "bdev_nvme_attach_controller", 00:36:42.244 "params": { 00:36:42.244 "adrfam": "IPv4", 00:36:42.244 "ctrlr_loss_timeout_sec": 0, 00:36:42.244 "ddgst": false, 00:36:42.244 "fast_io_fail_timeout_sec": 0, 00:36:42.244 "hdgst": false, 00:36:42.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:42.244 "multipath": "multipath", 00:36:42.244 "name": "nvme0", 00:36:42.244 "prchk_guard": false, 00:36:42.244 "prchk_reftag": false, 00:36:42.244 "psk": "key0", 00:36:42.244 "reconnect_delay_sec": 0, 00:36:42.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:42.244 "traddr": "127.0.0.1", 00:36:42.244 "trsvcid": "4420", 00:36:42.244 "trtype": "TCP" 00:36:42.244 } 00:36:42.244 }, 00:36:42.244 { 00:36:42.244 "method": "bdev_nvme_set_hotplug", 00:36:42.244 "params": { 00:36:42.244 "enable": false, 00:36:42.244 "period_us": 100000 00:36:42.244 } 00:36:42.244 }, 00:36:42.244 { 00:36:42.244 "method": "bdev_wait_for_examine" 00:36:42.244 } 00:36:42.244 ] 00:36:42.244 }, 00:36:42.244 { 00:36:42.244 "subsystem": "nbd", 00:36:42.244 "config": [] 00:36:42.244 } 00:36:42.244 ] 00:36:42.244 }' 00:36:42.244 14:41:09 keyring_file -- keyring/file.sh@115 -- # killprocess 120602 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 120602 ']' 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@956 -- # kill -0 120602 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 120602 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:42.244 killing process with pid 120602 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 120602' 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@971 -- # kill 120602 00:36:42.244 Received shutdown signal, test time was about 1.000000 seconds 00:36:42.244 00:36:42.244 Latency(us) 00:36:42.244 [2024-11-06T14:41:09.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.244 [2024-11-06T14:41:09.879Z] =================================================================================================================== 00:36:42.244 [2024-11-06T14:41:09.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:42.244 14:41:09 keyring_file -- common/autotest_common.sh@976 -- # wait 120602 00:36:42.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:42.811 14:41:10 keyring_file -- keyring/file.sh@118 -- # bperfpid=121064 00:36:42.811 14:41:10 keyring_file -- keyring/file.sh@120 -- # waitforlisten 121064 /var/tmp/bperf.sock 00:36:42.811 14:41:10 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 121064 ']' 00:36:42.811 14:41:10 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:42.811 14:41:10 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:42.811 "subsystems": [ 00:36:42.811 { 00:36:42.811 "subsystem": "keyring", 00:36:42.811 "config": [ 00:36:42.811 { 00:36:42.811 "method": "keyring_file_add_key", 00:36:42.811 "params": { 00:36:42.811 "name": "key0", 00:36:42.811 "path": "/tmp/tmp.CNh1CWMKWL" 00:36:42.811 } 00:36:42.811 }, 00:36:42.811 { 00:36:42.811 "method": "keyring_file_add_key", 00:36:42.811 "params": { 00:36:42.811 "name": "key1", 00:36:42.811 "path": "/tmp/tmp.GH78l7JYNh" 00:36:42.811 } 00:36:42.811 } 00:36:42.811 ] 00:36:42.811 }, 00:36:42.811 { 00:36:42.811 "subsystem": "iobuf", 00:36:42.811 "config": [ 00:36:42.811 { 00:36:42.811 "method": "iobuf_set_options", 00:36:42.811 "params": { 00:36:42.811 "enable_numa": false, 00:36:42.811 "large_bufsize": 135168, 00:36:42.811 "large_pool_count": 1024, 00:36:42.811 "small_bufsize": 8192, 00:36:42.811 "small_pool_count": 8192 00:36:42.811 } 00:36:42.811 } 00:36:42.811 ] 00:36:42.811 }, 00:36:42.811 { 00:36:42.811 "subsystem": "sock", 00:36:42.811 "config": [ 00:36:42.811 { 00:36:42.811 "method": "sock_set_default_impl", 00:36:42.811 "params": { 00:36:42.811 "impl_name": "posix" 00:36:42.811 } 00:36:42.811 }, 00:36:42.811 { 00:36:42.811 "method": "sock_impl_set_options", 00:36:42.812 "params": { 00:36:42.812 "enable_ktls": false, 00:36:42.812 "enable_placement_id": 0, 00:36:42.812 "enable_quickack": false, 00:36:42.812 "enable_recv_pipe": true, 00:36:42.812 "enable_zerocopy_send_client": false, 00:36:42.812 "enable_zerocopy_send_server": true, 00:36:42.812 "impl_name": "ssl", 00:36:42.812 "recv_buf_size": 4096, 00:36:42.812 "send_buf_size": 4096, 00:36:42.812 "tls_version": 0, 00:36:42.812 "zerocopy_threshold": 0 00:36:42.812 } 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "method": "sock_impl_set_options", 00:36:42.812 "params": { 00:36:42.812 "enable_ktls": false, 00:36:42.812 "enable_placement_id": 0, 00:36:42.812 "enable_quickack": false, 00:36:42.812 "enable_recv_pipe": true, 00:36:42.812 "enable_zerocopy_send_client": false, 00:36:42.812 "enable_zerocopy_send_server": true, 00:36:42.812 "impl_name": "posix", 00:36:42.812 "recv_buf_size": 2097152, 00:36:42.812 "send_buf_size": 2097152, 00:36:42.812 "tls_version": 0, 00:36:42.812 "zerocopy_threshold": 0 00:36:42.812 } 00:36:42.812 } 00:36:42.812 ] 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "subsystem": "vmd", 00:36:42.812 "config": 
[] 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "subsystem": "accel", 00:36:42.812 "config": [ 00:36:42.812 { 00:36:42.812 "method": "accel_set_options", 00:36:42.812 "params": { 00:36:42.812 "buf_count": 2048, 00:36:42.812 "large_cache_size": 16, 00:36:42.812 "sequence_count": 2048, 00:36:42.812 "small_cache_size": 128, 00:36:42.812 "task_count": 2048 00:36:42.812 } 00:36:42.812 } 00:36:42.812 ] 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "subsystem": "bdev", 00:36:42.812 "config": [ 00:36:42.812 { 00:36:42.812 "method": "bdev_set_options", 00:36:42.812 "params": { 00:36:42.812 "bdev_auto_examine": true, 00:36:42.812 "bdev_io_cache_size": 256, 00:36:42.812 "bdev_io_pool_size": 65535, 00:36:42.812 "iobuf_large_cache_size": 16, 00:36:42.812 "iobuf_small_cache_size": 128 00:36:42.812 } 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "method": "bdev_raid_set_options", 00:36:42.812 "params": { 00:36:42.812 "process_max_bandwidth_mb_sec": 0, 00:36:42.812 "process_window_size_kb": 1024 00:36:42.812 } 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "method": "bdev_iscsi_set_options", 00:36:42.812 "params": { 00:36:42.812 "timeout_sec": 30 00:36:42.812 } 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "method": "bdev_nvme_set_options", 00:36:42.812 "params": { 00:36:42.812 "action_on_timeout": "none", 00:36:42.812 "allow_accel_sequence": false, 00:36:42.812 "arbitration_burst": 0, 00:36:42.812 "bdev_retry_count": 3, 00:36:42.812 "ctrlr_loss_timeout_sec": 0, 00:36:42.812 "delay_cmd_submit": true, 00:36:42.812 "dhchap_dhgroups": [ 00:36:42.812 "null", 00:36:42.812 "ffdhe2048", 00:36:42.812 "ffdhe3072", 00:36:42.812 "ffdhe4096", 00:36:42.812 "ffdhe6144", 00:36:42.812 "ffdhe8192" 00:36:42.812 ], 00:36:42.812 "dhchap_digests": [ 00:36:42.812 "sha256", 00:36:42.812 "sha384", 00:36:42.812 "sha512" 00:36:42.812 ], 00:36:42.812 "disable_auto_failback": false, 00:36:42.812 "fast_io_fail_timeout_sec": 0, 00:36:42.812 "generate_uuids": false, 00:36:42.812 "high_priority_weight": 0, 00:36:42.812 "io_path_stat": false, 00:36:42.812 "io_queue_requests": 512, 00:36:42.812 "keep_alive_timeout_ms": 10000, 00:36:42.812 "low_priority_weight": 0, 00:36:42.812 "medium_priority_weight": 0, 00:36:42.812 "nvme_adminq_poll_period_us": 10000, 00:36:42.812 "nvme_error_stat": false, 00:36:42.812 "nvme_ioq_poll_period_us": 0, 00:36:42.812 "rdma_cm_event_timeout_ms": 0, 00:36:42.812 "rdma_max_cq_size": 0, 00:36:42.812 "rdma_srq_size": 0, 00:36:42.812 "reconnect_delay_sec": 0, 00:36:42.812 "timeout_admin_us": 0, 00:36:42.812 "timeout_us": 0, 00:36:42.812 "transport_ack_timeout": 0, 00:36:42.812 "transport_retry_count": 4, 00:36:42.812 "transport_tos": 0 00:36:42.812 } 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "method": "bdev_nvme_attach_controller", 00:36:42.812 "params": { 00:36:42.812 "adrfam": "IPv4", 00:36:42.812 "ctrlr_loss_timeout_sec": 0, 00:36:42.812 "ddgst": false, 00:36:42.812 "fast_io_fail_timeout_sec": 0, 00:36:42.812 "hdgst": false, 00:36:42.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:42.812 "multipath": "multipath", 00:36:42.812 "name": "nvme0", 00:36:42.812 "prchk_guard": false, 00:36:42.812 "prchk_reftag": false, 00:36:42.812 "psk": "key0", 00:36:42.812 "reconnect_delay_sec": 0, 00:36:42.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:42.812 "traddr": "127.0.0.1", 00:36:42.812 "trsvcid": "4420", 00:36:42.812 "trtype": "TCP" 00:36:42.812 } 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "method": "bdev_nvme_set_hotplug", 00:36:42.812 "params": { 00:36:42.812 "enable": false, 00:36:42.812 "period_us": 100000 00:36:42.812 
} 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "method": "bdev_wait_for_examine" 00:36:42.812 } 00:36:42.812 ] 00:36:42.812 }, 00:36:42.812 { 00:36:42.812 "subsystem": "nbd", 00:36:42.812 "config": [] 00:36:42.812 } 00:36:42.812 ] 00:36:42.812 }' 00:36:42.812 14:41:10 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:42.812 14:41:10 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:42.812 14:41:10 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:42.813 14:41:10 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:42.813 14:41:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:42.813 [2024-11-06 14:41:10.210126] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:36:42.813 [2024-11-06 14:41:10.210238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121064 ] 00:36:42.813 [2024-11-06 14:41:10.363150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.071 [2024-11-06 14:41:10.453444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.330 [2024-11-06 14:41:10.738259] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:43.589 14:41:11 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:43.589 14:41:11 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:43.589 14:41:11 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:43.589 14:41:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.589 14:41:11 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:43.848 14:41:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:43.848 14:41:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.848 14:41:11 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:43.848 14:41:11 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:43.848 14:41:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.107 14:41:11 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:44.107 14:41:11 keyring_file -- 
keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:44.107 14:41:11 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:44.107 14:41:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:44.366 14:41:11 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:44.366 14:41:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:44.366 14:41:11 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CNh1CWMKWL /tmp/tmp.GH78l7JYNh 00:36:44.366 14:41:11 keyring_file -- keyring/file.sh@20 -- # killprocess 121064 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 121064 ']' 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@956 -- # kill -0 121064 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 121064 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:44.366 killing process with pid 121064 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 121064' 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@971 -- # kill 121064 00:36:44.366 Received shutdown signal, test time was about 1.000000 seconds 00:36:44.366 00:36:44.366 Latency(us) 00:36:44.366 [2024-11-06T14:41:12.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.366 [2024-11-06T14:41:12.001Z] =================================================================================================================== 00:36:44.366 [2024-11-06T14:41:12.001Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:44.366 14:41:11 keyring_file -- common/autotest_common.sh@976 -- # wait 121064 00:36:44.933 14:41:12 keyring_file -- keyring/file.sh@21 -- # killprocess 120571 00:36:44.933 14:41:12 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 120571 ']' 00:36:44.933 14:41:12 keyring_file -- common/autotest_common.sh@956 -- # kill -0 120571 00:36:44.933 14:41:12 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:44.933 14:41:12 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:44.933 14:41:12 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 120571 00:36:44.933 14:41:12 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:44.933 14:41:12 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:44.934 14:41:12 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 120571' 00:36:44.934 killing process with pid 120571 00:36:44.934 14:41:12 keyring_file -- common/autotest_common.sh@971 -- # kill 120571 00:36:44.934 14:41:12 keyring_file -- common/autotest_common.sh@976 -- # wait 120571 00:36:46.312 00:36:46.312 real 0m15.273s 00:36:46.312 user 0m34.778s 00:36:46.312 sys 0m2.922s 00:36:46.312 14:41:13 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:46.312 ************************************ 00:36:46.312 END TEST keyring_file 00:36:46.312 14:41:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:46.312 
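(Annotation: teardown above relies on the stock killprocess helper; a condensed sketch of the steps the trace shows: a kill -0 liveness probe, a comm lookup that feeds the "killing process" message, then kill and wait. The real helper also special-cases processes run under sudo.)

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid"                                      # assert it is still running
    process_name=$(ps --no-headers -o comm= "$pid")     # reactor_0 / reactor_1 above
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}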
************************************ 00:36:46.312 14:41:13 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:36:46.312 14:41:13 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:36:46.312 14:41:13 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:46.312 14:41:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:46.312 14:41:13 -- common/autotest_common.sh@10 -- # set +x 00:36:46.312 ************************************ 00:36:46.312 START TEST keyring_linux 00:36:46.312 ************************************ 00:36:46.312 14:41:13 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:36:46.312 Joined session keyring: 378614838 00:36:46.312 * Looking for test storage... 00:36:46.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:46.312 14:41:13 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:46.312 14:41:13 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:36:46.312 14:41:13 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:46.312 14:41:13 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:46.312 14:41:13 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:46.313 14:41:13 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:46.313 14:41:13 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:46.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.313 --rc genhtml_branch_coverage=1 00:36:46.313 --rc genhtml_function_coverage=1 00:36:46.313 --rc genhtml_legend=1 00:36:46.313 --rc geninfo_all_blocks=1 00:36:46.313 --rc geninfo_unexecuted_blocks=1 00:36:46.313 00:36:46.313 ' 00:36:46.313 14:41:13 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:46.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.313 --rc genhtml_branch_coverage=1 00:36:46.313 --rc genhtml_function_coverage=1 00:36:46.313 --rc genhtml_legend=1 00:36:46.313 --rc geninfo_all_blocks=1 00:36:46.313 --rc geninfo_unexecuted_blocks=1 00:36:46.313 00:36:46.313 ' 00:36:46.313 14:41:13 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:46.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.313 --rc genhtml_branch_coverage=1 00:36:46.313 --rc genhtml_function_coverage=1 00:36:46.313 --rc genhtml_legend=1 00:36:46.313 --rc geninfo_all_blocks=1 00:36:46.313 --rc geninfo_unexecuted_blocks=1 00:36:46.313 00:36:46.313 ' 00:36:46.313 14:41:13 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:46.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.313 --rc genhtml_branch_coverage=1 00:36:46.313 --rc genhtml_function_coverage=1 00:36:46.313 --rc genhtml_legend=1 00:36:46.313 --rc geninfo_all_blocks=1 00:36:46.313 --rc geninfo_unexecuted_blocks=1 00:36:46.313 00:36:46.313 ' 00:36:46.313 14:41:13 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:46.313 14:41:13 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.313 14:41:13 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.313 14:41:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ca3beaa-4765-4780-96a0-c04fd97e7435 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=9ca3beaa-4765-4780-96a0-c04fd97e7435 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:46.574 14:41:13 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:46.574 14:41:13 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.574 14:41:13 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.574 14:41:13 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.574 14:41:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.574 14:41:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.574 14:41:13 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.574 14:41:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:46.574 14:41:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@51 -- # : 0 
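An aside on the "lt 1.15 2" / cmp_versions trace a few records up: scripts/common.sh is checking whether the installed lcov predates 2.x so the coverage options can be spelled accordingly. A minimal bash sketch of that field-by-field compare, assuming plain dotted versions; version_lt is an illustrative name, not the verbatim SPDK helper:

version_lt() {
    # True (status 0) when dotted version $1 sorts before $2; fields are
    # compared numerically left to right and missing fields count as 0.
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

The real helper also splits on '-' and ':' (the trace shows IFS=.-:), which this sketch skips.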
00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:46.574 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:46.574 14:41:13 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:46.574 14:41:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:46.574 14:41:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:46.574 14:41:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:46.574 14:41:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:46.574 14:41:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:46.574 14:41:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:46.574 14:41:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:46.574 14:41:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:46.574 14:41:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:46.574 14:41:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:46.574 14:41:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:46.574 14:41:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:46.574 14:41:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:46.575 14:41:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:46.575 14:41:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:46.575 14:41:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:46.575 14:41:13 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:46.575 14:41:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:46.575 14:41:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:46.575 14:41:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:46.575 /tmp/:spdk-test:key0 00:36:46.575 14:41:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:46.575 14:41:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:36:46.575 14:41:14 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:46.575 14:41:14 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:46.575 14:41:14 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:46.575 14:41:14 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:46.575 14:41:14 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:46.575 14:41:14 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:46.575 /tmp/:spdk-test:key1 00:36:46.575 14:41:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:46.575 14:41:14 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:46.575 14:41:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=121236 00:36:46.575 14:41:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 121236 00:36:46.575 14:41:14 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 121236 ']' 00:36:46.575 14:41:14 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:46.575 14:41:14 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:46.575 14:41:14 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.575 14:41:14 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:46.575 14:41:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:46.575 [2024-11-06 14:41:14.125419] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
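The two format_interchange_psk calls traced above are what turn the raw hex strings into the NVMeTLSkey-1:00:...: values planted in the keyring next. Judging from the nvmf/common.sh trace, and from decoding the base64 in the log itself, the recipe is base64 over the literal key bytes plus a little-endian CRC32, with the second field giving the HMAC id (00 = none). A standalone sketch of that transformation, using an inline python step just as the script does; take it as illustrative rather than the canonical implementation:

format_interchange_psk() {
    local key=$1 digest=$2
    # Emit NVMeTLSkey-1:<digest>:<base64(key || crc32_le(key))>:
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
# expected to match the :spdk-test:key0 payload seen in the log above

The chmod 0600 traced immediately afterwards keeps the on-disk PSK copies private to the test user.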
00:36:46.575 [2024-11-06 14:41:14.125568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121236 ] 00:36:46.834 [2024-11-06 14:41:14.287358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.834 [2024-11-06 14:41:14.370875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.402 14:41:14 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:47.402 14:41:14 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:36:47.402 14:41:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:47.402 14:41:14 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.402 14:41:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:47.402 [2024-11-06 14:41:14.959361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.402 null0 00:36:47.402 [2024-11-06 14:41:14.991352] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:47.402 [2024-11-06 14:41:14.991508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:47.402 14:41:15 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.402 14:41:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:47.402 841437222 00:36:47.402 14:41:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:47.402 641789449 00:36:47.402 14:41:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=121271 00:36:47.402 14:41:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 121271 /var/tmp/bperf.sock 00:36:47.402 14:41:15 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 121271 ']' 00:36:47.402 14:41:15 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:47.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:47.402 14:41:15 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:47.403 14:41:15 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:47.403 14:41:15 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:47.403 14:41:15 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:47.403 14:41:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:47.662 [2024-11-06 14:41:15.091459] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:36:47.662 [2024-11-06 14:41:15.091571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121271 ] 00:36:47.662 [2024-11-06 14:41:15.249120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.921 [2024-11-06 14:41:15.356316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.489 14:41:15 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:48.489 14:41:15 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:36:48.489 14:41:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:48.489 14:41:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:48.747 14:41:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:48.747 14:41:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:49.007 14:41:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:49.007 14:41:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:49.266 [2024-11-06 14:41:16.766110] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:49.266 nvme0n1 00:36:49.266 14:41:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:49.266 14:41:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:49.266 14:41:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:49.266 14:41:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:49.266 14:41:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:49.266 14:41:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.525 14:41:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:49.525 14:41:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:49.525 14:41:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:49.525 14:41:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:49.525 14:41:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.525 14:41:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.525 14:41:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:49.783 14:41:17 keyring_linux -- keyring/linux.sh@25 -- # sn=841437222 00:36:49.783 14:41:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:49.783 14:41:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:49.783 14:41:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 841437222 == \8\4\1\4\3\7\2\2\2 ]] 00:36:49.783 14:41:17 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 841437222 00:36:49.783 14:41:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:49.784 14:41:17 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:49.784 Running I/O for 1 seconds... 00:36:51.161 17981.00 IOPS, 70.24 MiB/s 00:36:51.161 Latency(us) 00:36:51.161 [2024-11-06T14:41:18.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.161 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:51.161 nvme0n1 : 1.01 17985.38 70.26 0.00 0.00 7090.13 5646.18 13812.97 00:36:51.161 [2024-11-06T14:41:18.796Z] =================================================================================================================== 00:36:51.161 [2024-11-06T14:41:18.796Z] Total : 17985.38 70.26 0.00 0.00 7090.13 5646.18 13812.97 00:36:51.161 { 00:36:51.161 "results": [ 00:36:51.161 { 00:36:51.161 "job": "nvme0n1", 00:36:51.161 "core_mask": "0x2", 00:36:51.161 "workload": "randread", 00:36:51.161 "status": "finished", 00:36:51.161 "queue_depth": 128, 00:36:51.161 "io_size": 4096, 00:36:51.161 "runtime": 1.006929, 00:36:51.161 "iops": 17985.3793067833, 00:36:51.161 "mibps": 70.25538791712226, 00:36:51.161 "io_failed": 0, 00:36:51.161 "io_timeout": 0, 00:36:51.161 "avg_latency_us": 7090.1298954253925, 00:36:51.161 "min_latency_us": 5646.178461538461, 00:36:51.161 "max_latency_us": 13812.972307692307 00:36:51.161 } 00:36:51.161 ], 00:36:51.161 "core_count": 1 00:36:51.161 } 00:36:51.161 14:41:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:51.161 14:41:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:51.161 14:41:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:51.161 14:41:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:51.161 14:41:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:51.161 14:41:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:51.161 14:41:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.161 14:41:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:51.420 14:41:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:51.420 14:41:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:51.420 14:41:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:51.420 14:41:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.420 14:41:18 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:51.420 14:41:18 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.420 14:41:18 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:51.420 14:41:18 keyring_linux -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:51.420 14:41:18 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:51.420 14:41:18 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:51.420 14:41:18 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.420 14:41:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.420 [2024-11-06 14:41:19.024186] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:51.420 [2024-11-06 14:41:19.024532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002fb00 (107): Transport endpoint is not connected 00:36:51.420 [2024-11-06 14:41:19.025509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002fb00 (9): Bad file descriptor 00:36:51.420 [2024-11-06 14:41:19.026503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:51.420 [2024-11-06 14:41:19.026526] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:51.420 [2024-11-06 14:41:19.026537] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:51.420 [2024-11-06 14:41:19.026550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
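The failed attach above is the point of this step: linux.sh asserts that connecting with :spdk-test:key1, which the target side never loaded, must fail, so the call is wrapped in the NOT helper whose es bookkeeping shows up in the trace. A sketch of that inversion, assuming, as the (( es > 128 )) check suggests, that death by signal counts as a real failure rather than a passing negative test; the real helper also validates its argument with type -t first:

NOT() {
    # Invert the wrapped command's status: succeed only when it fails.
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        # 129+ means killed by a signal: propagate as a genuine failure.
        return "$es"
    fi
    (( es != 0 ))
}

NOT false && echo "expected failure observed"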
00:36:51.420 2024/11/06 14:41:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:51.420 request: 00:36:51.420 { 00:36:51.420 "method": "bdev_nvme_attach_controller", 00:36:51.420 "params": { 00:36:51.420 "name": "nvme0", 00:36:51.420 "trtype": "tcp", 00:36:51.420 "traddr": "127.0.0.1", 00:36:51.420 "adrfam": "ipv4", 00:36:51.420 "trsvcid": "4420", 00:36:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.420 "prchk_reftag": false, 00:36:51.420 "prchk_guard": false, 00:36:51.420 "hdgst": false, 00:36:51.420 "ddgst": false, 00:36:51.420 "psk": ":spdk-test:key1", 00:36:51.420 "allow_unrecognized_csi": false 00:36:51.420 } 00:36:51.420 } 00:36:51.420 Got JSON-RPC error response 00:36:51.420 GoRPCClient: error on JSON-RPC call 00:36:51.420 14:41:19 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:51.420 14:41:19 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:51.420 14:41:19 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:51.420 14:41:19 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:51.420 14:41:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:51.420 14:41:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:51.420 14:41:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:51.420 14:41:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:51.420 14:41:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:51.420 14:41:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:51.420 14:41:19 keyring_linux -- keyring/linux.sh@33 -- # sn=841437222 00:36:51.421 14:41:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 841437222 00:36:51.421 1 links removed 00:36:51.421 14:41:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:51.421 14:41:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:51.421 14:41:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:51.679 14:41:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:51.679 14:41:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:51.679 14:41:19 keyring_linux -- keyring/linux.sh@33 -- # sn=641789449 00:36:51.679 14:41:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 641789449 00:36:51.679 1 links removed 00:36:51.679 14:41:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 121271 00:36:51.679 14:41:19 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 121271 ']' 00:36:51.679 14:41:19 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 121271 00:36:51.679 14:41:19 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:36:51.679 14:41:19 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:51.679 14:41:19 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 121271 00:36:51.679 14:41:19 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:51.679 
14:41:19 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:51.679 killing process with pid 121271 00:36:51.679 14:41:19 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 121271' 00:36:51.679 Received shutdown signal, test time was about 1.000000 seconds 00:36:51.679 00:36:51.679 Latency(us) 00:36:51.679 [2024-11-06T14:41:19.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.679 [2024-11-06T14:41:19.314Z] =================================================================================================================== 00:36:51.679 [2024-11-06T14:41:19.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:51.680 14:41:19 keyring_linux -- common/autotest_common.sh@971 -- # kill 121271 00:36:51.680 14:41:19 keyring_linux -- common/autotest_common.sh@976 -- # wait 121271 00:36:52.246 14:41:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 121236 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 121236 ']' 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 121236 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 121236 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:52.246 killing process with pid 121236 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 121236' 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@971 -- # kill 121236 00:36:52.246 14:41:19 keyring_linux -- common/autotest_common.sh@976 -- # wait 121236 00:36:53.185 00:36:53.185 real 0m7.049s 00:36:53.185 user 0m12.344s 00:36:53.185 sys 0m1.526s 00:36:53.185 14:41:20 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:53.185 ************************************ 00:36:53.185 END TEST keyring_linux 00:36:53.185 ************************************ 00:36:53.185 14:41:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:53.446 14:41:20 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:53.446 14:41:20 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:53.447 14:41:20 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:53.447 14:41:20 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:53.447 14:41:20 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:53.447 14:41:20 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:53.447 14:41:20 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:53.447 14:41:20 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:36:53.447 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:36:53.447 14:41:20 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:53.447 14:41:20 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:36:53.447 14:41:20 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:36:53.447 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:36:55.358 INFO: APP EXITING 00:36:55.358 INFO: killing all VMs 00:36:55.358 INFO: killing vhost app 00:36:55.358 INFO: EXIT DONE 00:36:55.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:55.620 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:55.620 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:56.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:56.455 Cleaning 00:36:56.455 Removing: /var/run/dpdk/spdk0/config 00:36:56.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:56.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:56.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:56.456 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:56.456 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:56.456 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:56.456 Removing: /var/run/dpdk/spdk1/config 00:36:56.456 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:56.456 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:56.456 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:56.456 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:56.456 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:56.456 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:56.456 Removing: /var/run/dpdk/spdk2/config 00:36:56.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:56.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:56.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:56.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:56.456 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:56.456 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:56.456 Removing: /var/run/dpdk/spdk3/config 00:36:56.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:56.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:56.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:56.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:56.456 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:56.456 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:56.456 Removing: /var/run/dpdk/spdk4/config 00:36:56.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:56.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:56.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:56.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:56.456 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:56.456 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:56.456 Removing: /dev/shm/nvmf_trace.0 00:36:56.456 Removing: /dev/shm/spdk_tgt_trace.pid58574 00:36:56.456 Removing: /var/run/dpdk/spdk0 00:36:56.456 Removing: /var/run/dpdk/spdk1 00:36:56.456 Removing: /var/run/dpdk/spdk2 00:36:56.456 Removing: /var/run/dpdk/spdk3 00:36:56.456 Removing: /var/run/dpdk/spdk4 00:36:56.456 Removing: /var/run/dpdk/spdk_pid100222 00:36:56.456 Removing: 
/var/run/dpdk/spdk_pid100885 00:36:56.456 Removing: /var/run/dpdk/spdk_pid100888 00:36:56.456 Removing: /var/run/dpdk/spdk_pid102962 00:36:56.456 Removing: /var/run/dpdk/spdk_pid103054 00:36:56.456 Removing: /var/run/dpdk/spdk_pid103145 00:36:56.456 Removing: /var/run/dpdk/spdk_pid103237 00:36:56.456 Removing: /var/run/dpdk/spdk_pid103402 00:36:56.456 Removing: /var/run/dpdk/spdk_pid103499 00:36:56.456 Removing: /var/run/dpdk/spdk_pid103589 00:36:56.456 Removing: /var/run/dpdk/spdk_pid103676 00:36:56.456 Removing: /var/run/dpdk/spdk_pid104078 00:36:56.456 Removing: /var/run/dpdk/spdk_pid104854 00:36:56.456 Removing: /var/run/dpdk/spdk_pid106268 00:36:56.456 Removing: /var/run/dpdk/spdk_pid106475 00:36:56.456 Removing: /var/run/dpdk/spdk_pid106766 00:36:56.456 Removing: /var/run/dpdk/spdk_pid107326 00:36:56.456 Removing: /var/run/dpdk/spdk_pid107722 00:36:56.456 Removing: /var/run/dpdk/spdk_pid110573 00:36:56.456 Removing: /var/run/dpdk/spdk_pid110616 00:36:56.456 Removing: /var/run/dpdk/spdk_pid110982 00:36:56.456 Removing: /var/run/dpdk/spdk_pid111027 00:36:56.456 Removing: /var/run/dpdk/spdk_pid111442 00:36:56.456 Removing: /var/run/dpdk/spdk_pid112015 00:36:56.456 Removing: /var/run/dpdk/spdk_pid112439 00:36:56.456 Removing: /var/run/dpdk/spdk_pid113538 00:36:56.456 Removing: /var/run/dpdk/spdk_pid114620 00:36:56.456 Removing: /var/run/dpdk/spdk_pid114739 00:36:56.456 Removing: /var/run/dpdk/spdk_pid114808 00:36:56.456 Removing: /var/run/dpdk/spdk_pid116429 00:36:56.456 Removing: /var/run/dpdk/spdk_pid116786 00:36:56.456 Removing: /var/run/dpdk/spdk_pid117136 00:36:56.456 Removing: /var/run/dpdk/spdk_pid117716 00:36:56.456 Removing: /var/run/dpdk/spdk_pid117723 00:36:56.456 Removing: /var/run/dpdk/spdk_pid118134 00:36:56.456 Removing: /var/run/dpdk/spdk_pid118292 00:36:56.456 Removing: /var/run/dpdk/spdk_pid118453 00:36:56.456 Removing: /var/run/dpdk/spdk_pid118554 00:36:56.456 Removing: /var/run/dpdk/spdk_pid118713 00:36:56.719 Removing: /var/run/dpdk/spdk_pid118822 00:36:56.719 Removing: /var/run/dpdk/spdk_pid119562 00:36:56.719 Removing: /var/run/dpdk/spdk_pid119598 00:36:56.719 Removing: /var/run/dpdk/spdk_pid119640 00:36:56.719 Removing: /var/run/dpdk/spdk_pid120034 00:36:56.719 Removing: /var/run/dpdk/spdk_pid120069 00:36:56.719 Removing: /var/run/dpdk/spdk_pid120107 00:36:56.719 Removing: /var/run/dpdk/spdk_pid120571 00:36:56.719 Removing: /var/run/dpdk/spdk_pid120602 00:36:56.719 Removing: /var/run/dpdk/spdk_pid121064 00:36:56.719 Removing: /var/run/dpdk/spdk_pid121236 00:36:56.719 Removing: /var/run/dpdk/spdk_pid121271 00:36:56.719 Removing: /var/run/dpdk/spdk_pid58372 00:36:56.719 Removing: /var/run/dpdk/spdk_pid58574 00:36:56.719 Removing: /var/run/dpdk/spdk_pid58850 00:36:56.719 Removing: /var/run/dpdk/spdk_pid58954 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59000 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59123 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59153 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59306 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59580 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59765 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59876 00:36:56.719 Removing: /var/run/dpdk/spdk_pid59976 00:36:56.719 Removing: /var/run/dpdk/spdk_pid60091 00:36:56.719 Removing: /var/run/dpdk/spdk_pid60125 00:36:56.719 Removing: /var/run/dpdk/spdk_pid60167 00:36:56.719 Removing: /var/run/dpdk/spdk_pid60236 00:36:56.719 Removing: /var/run/dpdk/spdk_pid60334 00:36:56.719 Removing: /var/run/dpdk/spdk_pid60963 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61039 
00:36:56.719 Removing: /var/run/dpdk/spdk_pid61109 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61137 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61251 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61279 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61382 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61410 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61469 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61499 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61558 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61588 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61778 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61809 00:36:56.719 Removing: /var/run/dpdk/spdk_pid61898 00:36:56.719 Removing: /var/run/dpdk/spdk_pid62391 00:36:56.719 Removing: /var/run/dpdk/spdk_pid62781 00:36:56.719 Removing: /var/run/dpdk/spdk_pid65661 00:36:56.719 Removing: /var/run/dpdk/spdk_pid65707 00:36:56.719 Removing: /var/run/dpdk/spdk_pid66082 00:36:56.719 Removing: /var/run/dpdk/spdk_pid66129 00:36:56.719 Removing: /var/run/dpdk/spdk_pid66552 00:36:56.719 Removing: /var/run/dpdk/spdk_pid67137 00:36:56.719 Removing: /var/run/dpdk/spdk_pid67581 00:36:56.719 Removing: /var/run/dpdk/spdk_pid68676 00:36:56.719 Removing: /var/run/dpdk/spdk_pid69772 00:36:56.719 Removing: /var/run/dpdk/spdk_pid69902 00:36:56.719 Removing: /var/run/dpdk/spdk_pid69978 00:36:56.719 Removing: /var/run/dpdk/spdk_pid71603 00:36:56.719 Removing: /var/run/dpdk/spdk_pid71972 00:36:56.719 Removing: /var/run/dpdk/spdk_pid79359 00:36:56.719 Removing: /var/run/dpdk/spdk_pid79794 00:36:56.719 Removing: /var/run/dpdk/spdk_pid80438 00:36:56.719 Removing: /var/run/dpdk/spdk_pid80949 00:36:56.719 Removing: /var/run/dpdk/spdk_pid80952 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81019 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81079 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81141 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81186 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81189 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81221 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81260 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81263 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81322 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81387 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81448 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81488 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81497 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81523 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81829 00:36:56.719 Removing: /var/run/dpdk/spdk_pid81993 00:36:56.719 Removing: /var/run/dpdk/spdk_pid82259 00:36:56.719 Removing: /var/run/dpdk/spdk_pid87701 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88198 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88308 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88473 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88532 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88586 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88645 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88830 00:36:56.719 Removing: /var/run/dpdk/spdk_pid88996 00:36:56.719 Removing: /var/run/dpdk/spdk_pid89295 00:36:56.719 Removing: /var/run/dpdk/spdk_pid89433 00:36:56.719 Removing: /var/run/dpdk/spdk_pid89701 00:36:56.719 Removing: /var/run/dpdk/spdk_pid89834 00:36:56.719 Removing: /var/run/dpdk/spdk_pid89971 00:36:56.719 Removing: /var/run/dpdk/spdk_pid90365 00:36:56.719 Removing: /var/run/dpdk/spdk_pid90840 00:36:56.719 Removing: /var/run/dpdk/spdk_pid90841 00:36:56.719 Removing: /var/run/dpdk/spdk_pid90842 00:36:56.981 Removing: /var/run/dpdk/spdk_pid91135 00:36:56.981 Removing: 
/var/run/dpdk/spdk_pid91409 00:36:56.981 Removing: /var/run/dpdk/spdk_pid91412 00:36:56.981 Removing: /var/run/dpdk/spdk_pid93829 00:36:56.981 Removing: /var/run/dpdk/spdk_pid94279 00:36:56.981 Removing: /var/run/dpdk/spdk_pid94651 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95273 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95276 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95681 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95696 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95711 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95751 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95760 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95900 00:36:56.981 Removing: /var/run/dpdk/spdk_pid95913 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96022 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96026 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96135 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96144 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96654 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96695 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96845 00:36:56.981 Removing: /var/run/dpdk/spdk_pid96953 00:36:56.981 Removing: /var/run/dpdk/spdk_pid97402 00:36:56.981 Removing: /var/run/dpdk/spdk_pid97658 00:36:56.981 Removing: /var/run/dpdk/spdk_pid98206 00:36:56.982 Removing: /var/run/dpdk/spdk_pid98855 00:36:56.982 Clean 00:36:56.982 14:41:24 -- common/autotest_common.sh@1451 -- # return 0 00:36:56.982 14:41:24 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:36:56.982 14:41:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:56.982 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:36:56.982 14:41:24 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:36:56.982 14:41:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:56.982 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:36:56.982 14:41:24 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:56.982 14:41:24 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:56.982 14:41:24 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:56.982 14:41:24 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:36:56.982 14:41:24 -- spdk/autotest.sh@394 -- # hostname 00:36:56.982 14:41:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:57.244 geninfo: WARNING: invalid characters removed from testname! 
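From here the run is coverage bookkeeping, and it condenses to capture, merge, and filter steps. Reassembled from the capture above and the merge/filter calls that follow, with the repeated --rc lcov_*/genhtml_* flags elided for readability (paths as in the log):

cd /home/vagrant/spdk_repo/spdk
out=/home/vagrant/spdk_repo/output

# 1. capture the counters the tests produced (hostname names the test run)
lcov -q -c --no-external -d . -t "$(hostname)" -o "$out/cov_test.info"
# 2. merge the pre-test baseline with the test capture
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# 3. strip vendored DPDK and system headers from the report
lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$out/cov_total.info"
# (the log applies the same -r step to */examples/vmd/*, */app/spdk_lspci/*
#  and */app/spdk_top/*)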
00:37:23.850 14:41:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:26.393 14:41:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:28.931 14:41:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:32.282 14:41:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:34.830 14:42:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:37.375 14:42:04 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:39.287 14:42:06 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:39.287 14:42:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:39.287 14:42:06 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:39.287 14:42:06 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:39.287 14:42:06 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:39.287 14:42:06 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:39.287 + [[ -n 4988 ]] 00:37:39.287 + sudo kill 4988 00:37:39.299 [Pipeline] } 00:37:39.313 [Pipeline] // timeout 00:37:39.317 [Pipeline] } 00:37:39.330 [Pipeline] // stage 00:37:39.335 [Pipeline] } 00:37:39.347 [Pipeline] // catchError 00:37:39.357 [Pipeline] stage 00:37:39.359 [Pipeline] { (Stop VM) 00:37:39.371 [Pipeline] sh 00:37:39.657 + vagrant halt 00:37:42.212 ==> default: Halting domain... 
00:37:46.445 [Pipeline] sh 00:37:46.765 + vagrant destroy -f 00:37:49.315 ==> default: Removing domain... 00:37:49.589 [Pipeline] sh 00:37:49.873 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:37:49.884 [Pipeline] } 00:37:49.898 [Pipeline] // stage 00:37:49.904 [Pipeline] } 00:37:49.919 [Pipeline] // dir 00:37:49.924 [Pipeline] } 00:37:49.938 [Pipeline] // wrap 00:37:49.944 [Pipeline] } 00:37:49.957 [Pipeline] // catchError 00:37:49.967 [Pipeline] stage 00:37:49.969 [Pipeline] { (Epilogue) 00:37:49.983 [Pipeline] sh 00:37:50.272 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:55.562 [Pipeline] catchError 00:37:55.564 [Pipeline] { 00:37:55.578 [Pipeline] sh 00:37:55.863 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:55.863 Artifacts sizes are good 00:37:55.873 [Pipeline] } 00:37:55.885 [Pipeline] // catchError 00:37:55.894 [Pipeline] archiveArtifacts 00:37:55.901 Archiving artifacts 00:37:56.033 [Pipeline] cleanWs 00:37:56.045 [WS-CLEANUP] Deleting project workspace... 00:37:56.045 [WS-CLEANUP] Deferred wipeout is used... 00:37:56.052 [WS-CLEANUP] done 00:37:56.053 [Pipeline] } 00:37:56.068 [Pipeline] // stage 00:37:56.073 [Pipeline] } 00:37:56.085 [Pipeline] // node 00:37:56.090 [Pipeline] End of Pipeline 00:37:56.119 Finished: SUCCESS